Search Results for "sourceforge.net/projects/winpython/files/winpython_3.8/3.8.10.0/winpython64-3.8.10.0.7z"

30 projects for "sourceforge.net/projects/winpython/files/winpython_3.8/3.8.10.0/winpython64-3.8.10.0.7z" with 2 filters applied:

  • 1
    MDCx

    Movie metadata scraper and organizer for media libraries and NFO files

    MDCx is an open source media metadata scraping and organization tool designed to automate the process of collecting detailed information for movie files. It retrieves metadata from multiple online sources and applies it to local media collections, helping users maintain structured and well-organized libraries. MDCx can download information such as titles, cast data, artwork, and other metadata, then generate standardized NFO files compatible with media management systems. It also supports image processing tasks such as downloading and cropping artwork used by media centers. ...
    Downloads: 10 This Week
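    As a hedged illustration of the NFO output described above: a minimal Kodi-style movie NFO written with the standard library. The field set is a tiny illustrative subset with placeholder values; MDCx's actual output is richer.

        import xml.etree.ElementTree as ET

        # Kodi-style <movie> NFO with a few representative fields.
        movie = ET.Element("movie")
        ET.SubElement(movie, "title").text = "Example Film"
        ET.SubElement(movie, "year").text = "2021"
        actor = ET.SubElement(movie, "actor")
        ET.SubElement(actor, "name").text = "Jane Doe"

        ET.ElementTree(movie).write("Example Film.nfo",
                                    encoding="utf-8", xml_declaration=True)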
  • 2
    fess

    Open source enterprise search server for websites, files, and data

    ...Fess is built on top of OpenSearch and offers an integrated solution for crawling, indexing, and searching documents from websites, file systems, and various data stores. Fess includes a built-in crawler that can collect content from sources such as databases, CSV files, and shared storage, making it suitable for centralized knowledge discovery. It supports indexing and searching across many document formats including office documents, PDFs, and compressed archives. It also provides a web-based administrative interface that allows administrators to configure crawling targets, manage indexing tasks, and adjust search settings from a graphical dashboard.
    Downloads: 7 This Week
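    A sketch of querying a running Fess instance from Python. The endpoint path and response shape below are assumptions based on Fess's legacy JSON search API and should be verified against your version's API documentation; the host and query are placeholders.

        import requests

        # Assumed legacy JSON endpoint; newer Fess versions expose
        # /api/v1/documents instead -- check your version's API docs.
        resp = requests.get("http://localhost:8080/json/", params={"q": "report"})
        resp.raise_for_status()
        for doc in resp.json().get("response", {}).get("result", []):
            print(doc.get("title"), doc.get("url"))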
  • 3
    goclone

    Fast CLI tool for cloning entire websites for local browsing offline

    goclone is a command-line utility designed to download and mirror complete websites to a local directory for offline access. It retrieves HTML pages, stylesheets, JavaScript files, images, and other assets from a target site and stores them on the user’s computer. It preserves the original site’s structure by maintaining relative links between pages, allowing the mirrored copy to function similarly to the live version when opened locally. Once a site has been cloned, users can browse the pages offline and navigate between them as if they were viewing the site online. goclone is written in Go and leverages concurrency through goroutines to perform downloads efficiently. goclone can also optionally start a local web server to serve the mirrored files for a more realistic browsing experience. ...
    Downloads: 21 This Week
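    goclone is a Go binary driven from the command line; a minimal sketch of invoking it from Python. The single-URL form follows the project's README; the flag for the optional local server should be checked with goclone --help.

        import subprocess

        # Mirrors the site into a local directory named after the domain.
        subprocess.run(["goclone", "https://example.com"], check=True)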
  • 4
    Maxun

    Small event-delegation library for decoupling event binding and handling

    Maxun (named JsAction by Google) is a lightweight event delegation library written in JavaScript. It allows developers to separate the logic of binding events from the code that handles those events, helping to keep DOM event wiring cleaner and more maintainable. The repository is archived and marked read-only, indicating that the project is no longer actively maintained or intended for production use. The README states that ongoing development has migrated into a larger framework under the...
    Downloads: 47 This Week
  • 5
    diskover-community

    Open source file indexing & storage analytics powered by Elasticsearch

    ...Diskover also helps identify outdated or unused files, duplicate data, and inefficient storage usage that can waste resources or increase operational costs. A Python-based indexing engine performs the scanning and indexing tasks.
    Downloads: 1 This Week
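    Since the index lives in Elasticsearch, indexed file metadata can be queried directly. A hedged sketch: the index name and field names below are assumptions for illustration, not taken from diskover's documentation.

        from elasticsearch import Elasticsearch

        es = Elasticsearch("http://localhost:9200")
        # Hypothetical index/field names: find files larger than 1 GiB.
        resp = es.search(index="diskover-myindex",
                         query={"range": {"size": {"gte": 1024 ** 3}}})
        for hit in resp["hits"]["hits"]:
            src = hit["_source"]
            print(src.get("name"), src.get("size"))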
  • 6
    bilibili-manga-downloader

    Download and manage Bilibili Manga chapters with GUI downloader

    BiliBili-Manga-Downloader is an open source desktop application designed to download manga chapters from the Bilibili Manga platform for offline reading and local management. It was created to address limitations of the web reading experience, such as intrusive advertisements, inconvenient image zooming, and inconsistent navigation during reading sessions. It provides a graphical user interface that allows users to search for manga titles using keywords, view detailed information about...
    Downloads: 17 This Week
  • 7
    tumblr-crawler

    Python crawler to download photos and videos from Tumblr blogs

    ...Users can specify one or multiple blogs to crawl by editing a configuration file or by passing parameters through the command line. Once executed, the script fetches media from the Tumblr API and stores the downloaded files in folders named after each blog. tumblr-crawler avoids re-downloading files that have already been saved, making repeated runs safe and useful for recovering missing media. It also supports optional proxy configuration, which can help when access to Tumblr content requires routing requests through a proxy server. With simple dependencies and straightforward configuration, the project offers a practical way to archive media content from Tumblr blogs.
    Downloads: 2 This Week
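    The project runs as a plain script; a sketch of the command-line form described above, wrapped in subprocess. The script name follows the repository as I recall it and should be verified; the blog names are placeholders.

        import subprocess

        # Downloads each blog's photos/videos into ./blog1/ and ./blog2/;
        # already-downloaded files are skipped on repeated runs.
        subprocess.run(["python", "tumblr-photo-video-ripper.py", "blog1,blog2"],
                       check=True)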
  • 8
    videodl

    Lightweight Python tool for downloading videos from many platforms

    Videodl is a lightweight video downloader implemented entirely in Python that allows users to retrieve videos from a wide range of online media platforms. It focuses on providing a fast and simple way to parse video pages and download media files, often prioritizing high-definition versions without watermarks when available. It supports numerous video platforms across both Chinese and international streaming ecosystems, enabling users to fetch content from many popular services through a unified interface. Videodl works by implementing platform-specific client modules that extract video information and download links from supported services. ...
    Downloads: 10 This Week
  • 9
    pandora-box

    Lightweight cross-platform desktop client for managing Mihomo proxies

    Pandora-Box is a lightweight desktop client designed to provide a graphical interface for the Mihomo proxy core. It allows users to manage proxy configurations and subscriptions through a simple and user-friendly interface rather than working directly with configuration files. Pandora-Box supports multiple proxy protocols and provides tools to organize and control network routing rules. It is designed to work for both casual users who want an easy setup and advanced users who need more control over proxy behavior. It also supports automatic rule grouping and features such as TUN mode to enable system-wide proxy routing. ...
    Downloads: 6 This Week
  • 10
    douyin

    Open source Douyin crawler for collecting and downloading public data

    DouyinCrawler is an open source data collection tool designed to gather publicly available information from the Douyin platform. It demonstrates how to build a Python-based web crawler combined with a graphical interface and command line functionality. It allows users to collect data from various types of Douyin content, including user profiles, videos, hashtags, and music pages. DouyinCrawler supports both automated scraping and batch operations to process multiple targets efficiently. It...
    Downloads: 6 This Week
  • 11
    Python API for JMComic

    Python crawler and API for downloading JMComic albums and images

    ...Its architecture includes components for configuration management, download orchestration, and client communication, allowing users to automate the retrieval of manga chapters or entire albums. It includes command-line functionality and configuration files so users can customize download behavior, directory structures, and performance settings without modifying code. It also supports plugin-based extensions that allow additional processing.
    Downloads: 3 This Week
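    A minimal sketch of the library use implied above. The download_album entry point follows the project's quickstart as I recall it; the album id is a placeholder, and option/configuration handling may differ between versions.

        import jmcomic

        # Fetches the album's images into the configured download directory.
        jmcomic.download_album('123456')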
  • 12
    Weibo Crawler

    Python crawler for collecting and downloading Sina Weibo user data

    weibo-crawler is a Python-based data collection tool designed to retrieve information from Sina Weibo user accounts. It automates the process of gathering posts, user profile details, and engagement metrics from one or more target accounts. weibo-crawler can extract comprehensive information about users, including profile attributes such as nickname, follower count, following count, and account metadata. It also captures detailed data about each post, including the content, publishing time,...
    Downloads: 0 This Week
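    weibo-crawler is configured through a config.json file rather than code. A hedged sketch that writes a minimal configuration: the key names follow the project's README as I recall them and the values are placeholders.

        import json

        config = {
            "user_id_list": ["1669879400"],  # numeric ids of target accounts
            "since_date": "2024-01-01",      # only collect posts after this date
            "write_mode": ["csv"],           # output format(s)
        }
        with open("config.json", "w", encoding="utf-8") as f:
            json.dump(config, f, ensure_ascii=False, indent=4)
        # Then run the crawler: python weibo.py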
  • 13
    news-please

    Python tool for crawling and extracting structured data from news sites

    ...Developers can use the software either as a standalone command line application or integrate it into their own Python applications through its library interface. Extracted article data can be stored in different formats and systems, including JSON files or database-backed storage solutions.
    Downloads: 0 This Week
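    A minimal example of the library interface mentioned above, following news-please's documented one-call usage (the URL is a placeholder).

        from newsplease import NewsPlease

        article = NewsPlease.from_url("https://example.com/some-news-article")
        print(article.title)     # extracted headline
        print(article.maintext)  # extracted article body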
  • 14
    twitch-batch-downloader

    Automate the download of entire Twitch.tv channels

    ...Saves each Twitch video into its own folder with date and time values, the video ID, stream metadata, a frame screenshot, a .ts parts list, and a sha256 hash. It keeps the original .ts files and generates mp4 files from them. It requires a shell and some command-line utilities; see README.md in the Code/git section for details.
    Downloads: 8 This Week
  • 15
    python-fxxk-spider

    Collection of 100+ Python web scraping projects and crawler examples

    ...Because websites frequently change their structure, some included projects may require adjustments before they can run successfully. It is designed as a long-term, continuously updated list of practical crawler implementations that developers can study, modify, and adapt to their own scraping tasks. It also highlights the importance of legal and responsible use of web scraping technologies.
    Downloads: 3 This Week
  • 16
    bilili

    Command-line Bilibili video and danmaku downloader with batch support

    ...It provides automated downloading capabilities that handle video streams and associated data efficiently while minimizing manual interaction. bilili supports retrieving both the video files and danmaku comments, which are the scrolling overlay comments commonly associated with the platform’s videos. These danmaku comments can be automatically converted into ASS subtitle format for playback compatibility with media players. bilili also implements multi-threaded and segmented downloading techniques to improve download performance and reliability. ...
    Downloads: 0 This Week
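    bilili is invoked from the command line with a video page URL; a sketch via subprocess. The URL is a placeholder; batch and quality flags should be checked with bilili --help.

        import subprocess

        # Downloads the video streams and the danmaku comments,
        # converting the danmaku to an .ass subtitle file alongside.
        subprocess.run(["bilili", "https://www.bilibili.com/video/BVxxxxxxxxxx"],
                       check=True)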
  • 17
    dirhunt

    Web crawler that finds hidden web directories without brute force

    ...Instead of sending large numbers of guess-based requests, it operates as a specialized crawler that intelligently explores websites to identify accessible or hidden directories. Dirhunt can detect directories that expose “Index Of” listings, which may reveal files and other resources that were not intended to be publicly visible. It can also identify situations where directories are intentionally hidden through empty index files or servers that return misleading responses such as fake 404 errors. Dirhunt processes HTML pages and other available sources to discover additional paths and directories while minimizing the number of requests sent to the server, making scans faster and less intrusive. ...
    Downloads: 6 This Week
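    dirhunt takes one or more starting URLs on the command line; a sketch via subprocess (the target is a placeholder).

        import subprocess

        # Crawls the site looking for directories, "Index Of" listings,
        # and misleading 404s, without brute-force wordlists.
        subprocess.run(["dirhunt", "http://example.com/"], check=True)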
  • 18
    Easyspider - Distributed Web Crawler

    Easy Spider is a distributed Perl Web Crawler Project from 2006

    Easy Spider is a distributed Perl web crawler project from 2006. It features code for crawling webpages, distributing the work to a server, and generating XML files from the results. The client side can be any computer (Windows or Linux), and the server stores all data. Websites that use EasySpider crawling for article writing software: https://www.artikelschreiber.com/en/ https://www.unaique.net/en/ https://www.unaique.com/ https://www.artikelschreiben.com/ https://www.buzzerstar.com/ https://easyperlspider.sourceforge.io/ https://www.sebastianenger.com/ https://www.artikelschreiber.com/opensource/ It is fun to look at code written a few years ago and see how much one has improved since. ...
    Downloads: 0 This Week
  • 19
    crawlergo

    Headless Chrome crawler for collecting URLs for vulnerability scans

    ...It also automatically fills and submits forms, helping discover hidden routes or parameters that might otherwise be missed by traditional crawlers. crawlergo includes a built-in URL de-duplication system that removes repeated or pseudo-static links while maintaining fast crawling speeds for large websites. crawlergo also analyzes page content to extract links and resources from multiple sources, including JavaScript files, comments, and configuration files.
    Downloads: 0 This Week
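    crawlergo is pointed at a Chrome/Chromium binary and a target; a sketch of the README-style invocation, wrapped in subprocess. The paths, tab count, and target are placeholders; verify flags against the current README.

        import subprocess

        # -c: path to a Chrome/Chromium binary; -t: max tab count (concurrency).
        subprocess.run(["./crawlergo", "-c", "/usr/bin/chromium",
                        "-t", "10", "http://example.com/"], check=True)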
  • 20
    grab-site

    Web crawler for archiving and backing up sites into WARC archives

    grab-site is an open source web crawling tool designed to archive and back up websites by recursively downloading their content. It works by taking a starting URL and systematically following links across the site, capturing pages and resources and saving them into WARC archive files for long-term preservation. Internally, the crawler uses a fork of the wpull engine to fetch and process web pages efficiently during large-scale crawls. grab-site includes a built-in dashboard that displays real-time crawl activity, including which URLs are currently being processed and how many remain in the queue. Users can dynamically apply ignore patterns during an active crawl, allowing them to skip problematic or unnecessary URLs that could slow down or block the archiving process. grab-site also provides predefined ignore sets for common site structures such as forums and other complex web platforms. ...
    Downloads: 0 This Week
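    A sketch of starting a crawl as described above, wrapped in subprocess. The one-URL form and the --igsets option follow grab-site's README as I recall it; verify against the current documentation.

        import subprocess

        # Archives the site into a WARC; --igsets applies a predefined
        # ignore set (here the one for forums) so junk URLs are skipped.
        subprocess.run(["grab-site", "--igsets=forums", "https://example.com/"],
                       check=True)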
  • 21
    pspider

    Simple Python framework for building multithreaded web crawlers

    ...By organizing crawling tasks into structured stages, PSpider allows developers to build scalable spiders while keeping the codebase relatively compact and readable. Its modular design also makes it easier to extend the framework with additional features or integrate it into existing Python projects.
    Downloads: 1 This Week
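    PSpider's own class names aren't reproduced here; instead, a stdlib-only sketch of the staged, multithreaded fetch/parse pattern the description refers to.

        import queue
        import threading
        import urllib.request

        url_queue = queue.Queue()

        def worker():
            # Fetch stage; a real spider would hand pages to parse/save stages.
            while True:
                url = url_queue.get()
                try:
                    html = urllib.request.urlopen(url, timeout=10).read()
                    print(url, len(html), "bytes")
                except OSError as exc:
                    print(url, "failed:", exc)
                finally:
                    url_queue.task_done()

        for _ in range(4):  # fixed-size worker pool
            threading.Thread(target=worker, daemon=True).start()

        url_queue.put("https://example.com/")
        url_queue.join()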
  • 22
    appcrawler

    Automated mobile app crawler and testing tool built on Appium

    ...AppCrawler works by traversing the interface structure of an application and executing predefined or dynamically discovered actions on clickable components. Its behavior can be customized using configuration files that define traversal rules, element selection logic, and specific actions triggered by conditions encountered during testing. AppCrawler supports rule-based filtering such as blacklists and whitelists to control which elements are explored and which are ignored.
    Downloads: 1 This Week
  • 23
    instagram-profilecrawl

    Instagram profile crawler that extracts posts, tags, and stats

    ...The collected data can include profile metadata, post details, engagement metrics, and commenter activity, allowing users to analyze account behavior or monitor profile growth over time. It also provides scripts for downloading images from crawled profiles and logging statistics into CSV files for tracking metrics like followers, likes, and comments. Authentication is optional, meaning the crawler can access public profile data without logging in.
    Downloads: 3 This Week
  • 24
    ruia

    Async Python framework for fast and flexible web scraping spiders

    ...Ruia follows a “write less, run faster” philosophy, emphasizing concise code and streamlined spider development. It provides a structured approach to building scraping projects through components such as data items, spiders, middleware, and plugins. Developers can define structured fields to extract information from HTML content and process responses asynchronously to improve crawling performance. It also supports middleware and plugin systems that allow customization of request handling, response processing, and additional functionality.
    Downloads: 6 This Week
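    A sketch of a ruia spider following the item/spider structure described above. The shape follows ruia's README; the CSS selectors target Hacker News as in the project's own example and may need updating for the site's current markup.

        from ruia import AttrField, TextField, Item, Spider

        class NewsItem(Item):
            # One item is extracted per element matching target_item.
            target_item = TextField(css_select='tr.athing')
            title = TextField(css_select='a.storylink')
            url = AttrField(css_select='a.storylink', attr='href')

        class NewsSpider(Spider):
            start_urls = ['https://news.ycombinator.com/']

            async def parse(self, response):
                async for item in NewsItem.get_items(html=await response.text()):
                    yield item

            async def process_item(self, item: NewsItem):
                print(item.title, item.url)

        if __name__ == '__main__':
            NewsSpider.start()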
  • 25
    bt-btt

    Guide and resources for accessing and using the U3C3 BitTorrent site

    ...It explains how BitTorrent and magnet link downloads operate, including the role of trackers and distributed hash table (DHT) networks in locating peers and downloading files. BT-btt also discusses different ways users can search for torrent resources, including strategies for improving search results when dealing with multiple language variants or different character encodings. Additional documentation explains why certain statistics such as active download counts may not always be accurate when DHT-based downloads are used.
    Downloads: 5 This Week
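    To make the tracker/DHT discussion above concrete, a short sketch that picks apart a magnet link with the standard library (the info-hash is a made-up placeholder).

        from urllib.parse import parse_qs, urlparse

        magnet = ("magnet:?xt=urn:btih:0123456789abcdef0123456789abcdef01234567"
                  "&dn=example-file&tr=udp%3A%2F%2Ftracker.example.org%3A6969")
        params = parse_qs(urlparse(magnet).query)
        print(params["xt"][0])       # urn:btih:<info-hash>, the DHT lookup key
        print(params["dn"][0])       # display name shown by the client
        print(params.get("tr", []))  # optional trackers; pure-DHT magnets omit them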