Search Results for "sourceforge.net/projects/winpython/files/winpython_3.8/3.8.10.0/winpython64-3.8.10.0.7z" - Page 2

Showing 55 open source projects for "sourceforge.net/projects/winpython/files/winpython_3.8/3.8.10.0/winpython64-3.8.10.0.7z"

  • 1
    Gerapy

    Distributed Crawler Management Framework Based on Scrapy

    Distributed crawler management framework based on Scrapy, Scrapyd, Scrapyd-Client, Scrapyd-API, Django, and Vue.js. Anyone who has written crawlers in Python has likely used Scrapy. Scrapy is indeed a very powerful crawler framework, with high crawling efficiency and good scalability; it is practically an essential tool for developing crawlers in Python. A single host is enough for small crawls, but once a crawl gets very large, one machine can't... (A minimal Scrapy spider is sketched after this entry.)
    Downloads: 0 This Week
  • 2
    Ayakashi

    The next generation web scraping framework

    The next-generation web scraping framework. The web has changed. Gone are the days when raw HTML parsing scripts were the right tool for the job. JavaScript and single-page applications are now the norm. Demand for data scraping and automation is higher than ever, from business needs to data science and machine learning. Our tools need to evolve. Ayakashi helps you build scraping and automation systems that are easy to build, whether simple or sophisticated, and that are highly performant, maintainable, and...
    Downloads: 0 This Week
  • 3

    dorker-py

    Discover files and hidden paths by performing advanced searches

    Google Dorking - Dorker Py. Discover files and hidden paths by performing advanced searches.
    Downloads: 0 This Week
  • 4
    ScrapBot 1.40 64bits

    Task automation software for accessing and manipulating website data.

    ScrapBot is task automation software that allows you to access, authenticate on, extract data from, and insert data into any website. It uses JavaScript to execute tasks, eliminating the need for a server or additional software installations. The system controls the accessed webpage through JavaScript, and the entire navigation can be viewed in the program window. The main.js script runs in a separate frame from the navigation frame but can access all page content without restrictions.
    Downloads: 0 This Week
  • 5
    Easyspider - Distributed Web Crawler

    Easy Spider is a distributed Perl Web Crawler Project from 2006

    Easy Spider is a distributed Perl web crawler project from 2006. It features code for crawling webpages, distributing the work to a server, and generating XML files from the results. The client can be any computer (Windows or Linux), and the server stores all data. Websites that use EasySpider crawling for article-writing software: https://www.artikelschreiber.com/en/ https://www.unaique.net/en/ https://www.unaique.com/ https://www.artikelschreiben.com/ https://www.buzzerstar.com/ https://easyperlspider.sourceforge.io/ https://www.sebastianenger.com/ https://www.artikelschreiber.com/opensource/ It is fun to look at code from a few years ago and see how one has improved. ...
    Downloads: 0 This Week
  • 6
    Scrapyd

    A service daemon to run Scrapy spiders

    Scrapyd can manage multiple projects, and each project can have multiple versions uploaded, but only the latest one is used for launching new spiders. A common (and useful) convention for the version name is the revision number of the version control tool you use to track your Scrapy project code, for example r23. Versions are not compared alphabetically but with a smarter algorithm (the same one the packaging library uses), so r10 compares greater than r9, for example. (A sketch of the upload-and-schedule flow follows below.)
    Downloads: 0 This Week
  • 7
    crawlergo

    Headless Chrome crawler for collecting URLs for vulnerability scans

    ...It also automatically fills and submits forms, helping discover hidden routes or parameters that might otherwise be missed by traditional crawlers. crawlergo includes a built-in URL de-duplication system that removes repeated or pseudo-static links while maintaining fast crawling speeds for large websites. crawlergo also analyzes page content to extract links and resources from multiple sources, including JavaScript files, comments, and configuration files.
    Downloads: 0 This Week
  • 8
    Web Spider, Web Crawler, Email Extractor

    Free tool that extracts emails, phone numbers, and custom text from the Web using Java regex

    In Files there is WebCrawlerMySQL.jar, which supports MySQL connections. Please follow this link to get the latest version: https://sourceforge.net/projects/web-spider-web-crawler-extract/ Free web spider and crawler. Extracts information from the Web by parsing millions of pages. Stores data in a Derby or MySQL database, and data is not lost after force-closing the spider. (The regex idea is sketched below.)
    Downloads: 0 This Week
  • 9

    webotron

    Using industrial automation techniques for creating web scraping tools

    Industry uses machines that can easily maim or kill their operators and that run in very adverse environments. In spite of this, production quality must be close to perfect, without reliance on operator skill or attentiveness. Control programs must be robust, yet simple enough to be understood and maintained by non-programmer skilled trades such as electricians. The main programming model is the PLC, which implements double buffering and an event loop (a toy scan-cycle sketch follows below). The most advanced production model...
    Downloads: 1 This Week
  • 10
    grab-site

    Web crawler for archiving and backing up sites into WARC archives

    grab-site is an open source web crawling tool designed to archive and back up websites by recursively downloading their content. It works by taking a starting URL and systematically following links across the site, capturing pages and resources and saving them into WARC archive files for long-term preservation. Internally, the crawler uses a fork of the wpull engine to fetch and process web pages efficiently during large-scale crawls. grab-site includes a built-in dashboard that displays real-time crawl activity, including which URLs are currently being processed and how many remain in the queue. Users can dynamically apply ignore patterns during an active crawl, allowing them to skip problematic or unnecessary URLs that could slow down or block the archiving process. grab-site also provides predefined ignore sets for common site structures such as forums and other complex web platforms. ...
    Downloads: 0 This Week
  • 11
    pspider

    Simple Python framework for building multithreaded web crawlers

    ...By organizing crawling tasks into structured stages, PSpider allows developers to build scalable spiders while keeping the codebase relatively compact and readable. Its modular design also makes it easier to extend the framework with additional features or to integrate it into existing Python projects. (A generic staged-pipeline sketch follows below.)
    Downloads: 1 This Week
  • 12
    appcrawler

    Automated mobile app crawler and testing tool built on Appium

    ...AppCrawler works by traversing the interface structure of an application and executing predefined or dynamically discovered actions on clickable components. Its behavior can be customized using configuration files that define traversal rules, element selection logic, and specific actions triggered by conditions encountered during testing. AppCrawler supports rule-based filtering such as blacklists and whitelists to control which elements are explored and which are ignored.
    Downloads: 1 This Week
  • 13
    instagram-profilecrawl

    Instagram profile crawler that extracts posts, tags, and stats

    ...The collected data can include profile metadata, post details, engagement metrics, and commenter activity, allowing users to analyze account behavior or monitor profile growth over time. It also provides scripts for downloading images from crawled profiles and logging statistics into CSV files for tracking metrics like followers, likes, and comments. Authentication is optional, meaning the crawler can access public profile data without logging in.
    Downloads: 3 This Week
  • 14
    GoSpider

    GoSpider - Fast web spider written in Go

    GoSpider - fast web spider written in Go. Fast web crawling. Brute-forces and parses sitemap.xml. Parses robots.txt. Generates and verifies links from JavaScript files. Link finder. Finds AWS S3 buckets in the response source. Finds subdomains in the response source. Gets URLs from the Wayback Machine, Common Crawl, VirusTotal, and AlienVault. Output formatted for easy grepping. Supports Burp input. Crawls multiple sites in parallel.
    Downloads: 6 This Week
  • 15
    ruia

    Async Python framework for fast and flexible web scraping spiders

    ...Ruia follows a “write less, run faster” philosophy, emphasizing concise code and streamlined spider development. It provides a structured approach to building scraping projects through components such as data items, spiders, middleware, and plugins. Developers can define structured fields to extract information from HTML content and process responses asynchronously to improve crawling performance. It also supports middleware and plugin systems that allow customization of request handling, response processing, and additional functionality. (A hedged sketch of the field-based style follows below.)
    Downloads: 2 This Week
  • 16
    bt-btt

    Guide and resources for accessing and using the U3C3 BitTorrent site

    ...It explains how BitTorrent and magnet link downloads operate, including the role of trackers and distributed hash table (DHT) networks in locating peers and downloading files. BT-btt also discusses different ways users can search for torrent resources, including strategies for improving search results when dealing with multiple language variants or different character encodings. Additional documentation explains why certain statistics such as active download counts may not always be accurate when DHT-based downloads are used.
    Downloads: 4 This Week
  • 17
    proxypool

    Proxy crawler that aggregates, tests, and serves usable proxy nodes

    ...After collecting these nodes, proxypool removes duplicates and verifies whether each node is functional, then serves a list of nodes that have passed availability checks. It supports several popular proxy protocols, allowing it to work with multiple types of proxy infrastructure. The crawler's behavior and the sources it scans can be adjusted through configuration files, so users can customize how nodes are gathered and maintained. It also supports scheduled crawling to continuously update the proxy list and keep the pool current with newly discovered nodes. (A simplified dedupe-and-verify sketch follows below.)
    Downloads: 11 This Week
  • 18
    ECommerceCrawlers

    Collection of Python ecommerce and website crawler example projects

    ECommerceCrawlers is a collection of practical Python web crawler projects designed to gather data from a variety of ecommerce platforms, websites, and online services. It aggregates many independent crawler examples created by contributors and organized into separate subprojects that target specific sites or data sources. These examples demonstrate how to build and operate web scrapers capable of collecting structured information such as product listings, news content, job postings, social media data, and other publicly available web data. ...
    Downloads: 7 This Week
  • 19
    GitGet

    Ever wanted to download only a part of a Git repository?

    Ever wanted to download only a part of a Git repository? Just paste the URL of the repo you want to download, then sit back and enjoy. This simple Java application uses web scraping to download only the files you need, helping you save precious bandwidth and space. (A sketch of the same idea via GitHub's REST API follows below.)
    Downloads: 0 This Week
  • 20
    crawler4j

    Open source web crawler for Java

    ...This class decides which URLs should be crawled and handles the downloaded page. The shouldVisit function decides whether a given URL should be crawled. In the example above, the crawler skips .css, .js, and media files and only allows pages within the ics domain. The visit function is called after the content of a URL has been downloaded successfully, and from it you can easily get the URL, text, links, HTML, and unique id of the downloaded page. You should also implement a controller class that specifies the seeds of the crawl, the folder in which intermediate crawl data is stored, and the number of concurrent threads. (A rough Python analogue of this contract is sketched below.)
    Downloads: 0 This Week
  • 21
    DSTK - DataScience ToolKit

    DSTK - DataScience ToolKit for All of Us

    DSTK - DataScience ToolKit is open source, free software for statistical analysis, data visualization, text analysis, and predictive analytics. A newer version with a smaller file size can be found at https://sourceforge.net/projects/dstk3/ It is designed to be straightforward, easy to use, and familiar to SPSS users. While JASP offers more statistical features, DSTK aims to be a broad workbench, including text analysis and predictive analytics features. You may of course prefer JASP for advanced data editing and RapidMiner for advanced prediction modeling. ...
    Downloads: 1 This Week
  • 22
    Save For Offline

    Android app for saving webpages for offline reading

    ...Save For Offline is an Android app for saving full web pages for offline reading, with lots of features and options. In your web browser, select 'Share', then 'Save For Offline'. It saves real HTML files which can be opened in other apps and on other devices. Download and save entire web pages with all their assets for offline reading and viewing. Save HTML files in a custom directory. Saving happens in the background, so there is no need to wait for it to finish. Night mode, with both a dark theme and the option to invert colors when viewing pages (white becomes black and vice versa). ...
    Downloads: 1 This Week
  • 23

    PGBuild

    Compile your mobile web pages into mobile apps via build.phonegap.com

    ...The spider is controlled by a project file that sets the rules for the spider and the options for the PhoneGap build service. You may create and manage your PhoneGap project source files manually on your web server, or use PGBuild to connect to a CMS and extract content. PGBuild is managed from a small widget that you can use yourself or integrate into a CMS.
    Downloads: 0 This Week
  • 24
    ItSucks
    This project is a Java web spider (web crawler) with the ability to download (and resume) files. It is also highly customizable with regular expressions and download templates. All backend functionality is also available as a separate library.
    Downloads: 3 This Week
  • 25
    arachnode.net is an open source web crawler for downloading, indexing, and storing Internet content, including e-mail addresses, files, hyperlinks, images, and web pages. It is written in C# using SQL Server 2008. See http://arachnode.net for the latest.
    Downloads: 0 This Week