Data Science Tools for Linux

  • 1
    SADSA

    Software Application for Data Science and Analytics

    SADSA (Software Application for Data Science and Analytics) is a Python-based desktop application designed to simplify statistical analysis, machine learning, and data visualization for students, researchers, and data professionals. Built with a Python GUI, SADSA provides a menu-driven interface for handling datasets, applying transformations, running advanced statistical tests and machine learning algorithms, and generating insightful plots, all without writing code.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 2
    AWS SDK for pandas

    Easy integration with Athena, Glue, Redshift, Timestream, Neptune

    aws-sdk-pandas (formerly AWS Data Wrangler) bridges pandas with the AWS analytics stack so DataFrames flow seamlessly to and from cloud services. With a few lines of code, you can read from and write to Amazon S3 in Parquet/CSV/JSON/ORC, register tables in the AWS Glue Data Catalog, and query with Amazon Athena directly into pandas. The library abstracts efficient patterns like partitioning, compression, and vectorized I/O so you get performant data lake operations without hand-rolling boilerplate. It also supports Redshift, OpenSearch, and other services, enabling ETL tasks that blend SQL engines and Python transformations. Operational helpers handle IAM, sessions, and concurrency while exposing knobs for encryption, versioning, and catalog consistency. The result is a productive workflow that keeps your analytics in Python while leveraging AWS-native storage and query engines at scale.
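    A hedged sketch of the round trip described above (the bucket, Glue database, and table names below are placeholders):

    ```python
    import awswrangler as wr
    import pandas as pd

    df = pd.DataFrame({"id": [1, 2], "value": ["a", "b"]})

    # Write the DataFrame to S3 as Parquet and register it in the Glue Data Catalog.
    wr.s3.to_parquet(
        df=df,
        path="s3://my-bucket/my-table/",  # placeholder bucket/prefix
        dataset=True,
        database="my_database",           # placeholder Glue database
        table="my_table",
    )

    # Query the table back into a pandas DataFrame with Athena.
    result = wr.athena.read_sql_query("SELECT * FROM my_table", database="my_database")
    ```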
    Downloads: 0 This Week
    Last Update:
    See Project
  • 3
    AWS Step Functions Data Science SDK

    For building machine learning (ML) workflows and pipelines on AWS

    The AWS Step Functions Data Science SDK is an open-source library that allows data scientists to easily create workflows that process and publish machine learning models using Amazon SageMaker and AWS Step Functions. You can create machine learning workflows in Python that orchestrate AWS infrastructure at scale, without having to provision and integrate the AWS services separately. The quickest way to see how the AWS Step Functions Data Science SDK works is to look through the related example notebooks, which provide code and descriptions for creating and running workflows in AWS Step Functions using the SDK. In Amazon SageMaker, example Jupyter notebooks are available in the example notebooks portion of a notebook instance. To run the example notebooks locally, download the sample notebooks and open them in a working Jupyter instance.
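    A minimal hedged sketch of defining and creating a workflow with the SDK (the role ARN is a placeholder, and a real pipeline would use SageMaker steps rather than Pass states):

    ```python
    from stepfunctions.steps import Chain, Pass
    from stepfunctions.workflow import Workflow

    # Two no-op states standing in for preprocessing and training steps.
    definition = Chain([Pass("Preprocess"), Pass("Train")])

    workflow = Workflow(
        name="example-ml-workflow",
        definition=definition,
        role="arn:aws:iam::123456789012:role/StepFunctionsWorkflowRole",  # placeholder
    )
    workflow.create()   # registers the state machine in AWS Step Functions
    workflow.execute()  # starts an execution
    ```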
    Downloads: 0 This Week
    Last Update:
    See Project
  • 4
    Amazon SageMaker Examples

    Jupyter notebooks that demonstrate how to build models using SageMaker

    Welcome to Amazon SageMaker. This project highlights example Jupyter notebooks for a variety of machine learning use cases that you can run in SageMaker. If you’re new to SageMaker, we recommend starting with the more feature-rich SageMaker Studio. It uses the familiar JupyterLab interface and has seamless integration with a variety of deep learning and data science environments and scalable compute resources for training, inference, and other ML operations. Studio offers teams and companies easy onboarding for their team members, freeing them from complex systems-administration and security processes. Administrators control data access and resource provisioning for their users. Notebook Instances are another option; they have the familiar Jupyter and JupyterLab interfaces that work well for single users or small teams where users are also administrators. Advanced users also use SageMaker solely with the AWS CLI and Python scripts using boto3 and/or the SageMaker Python SDK.
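    As a small, hedged illustration of the boto3 route mentioned above (region and credentials come from your AWS configuration; the call simply lists recent training jobs):

    ```python
    import boto3

    # List the five most recent SageMaker training jobs in the account.
    sm = boto3.client("sagemaker")
    resp = sm.list_training_jobs(MaxResults=5, SortBy="CreationTime", SortOrder="Descending")

    for job in resp["TrainingJobSummaries"]:
        print(job["TrainingJobName"], job["TrainingJobStatus"])
    ```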
    Downloads: 0 This Week
    Last Update:
    See Project
  • 5
    Awesome Fraud Detection Research Papers

    A curated list of data mining papers about fraud detection

    A curated list of data mining papers about fraud detection from several conferences.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 6
    ClearML

    Streamline your ML workflow

    ClearML is an open source platform that automates and simplifies developing and managing machine learning solutions for thousands of data science teams all over the world. It is designed as an end-to-end MLOps suite, allowing you to focus on developing your ML code and automation while ClearML ensures your work is reproducible and scalable. The ClearML Python package integrates ClearML into your existing scripts by adding just two lines of code, and optionally extends your experiments and other workflows with ClearML's powerful and versatile set of classes and methods. The ClearML Server stores experiment, model, and workflow data, and supports the web UI experiment manager and MLOps automation for reproducibility and tuning; it is available as a hosted service, and it is open source so you can deploy your own ClearML Server. The ClearML Agent handles MLOps orchestration, experiment and workflow reproducibility, and scalability.
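    A minimal sketch of the two-line integration the description mentions (project and task names are placeholders):

    ```python
    from clearml import Task

    # Registers this run with the ClearML Server and starts auto-logging
    # (console output, framework checkpoints, plots, argparse arguments).
    task = Task.init(project_name="examples", task_name="my-experiment")

    # ...the rest of your existing training script runs unchanged...
    ```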
    Downloads: 0 This Week
    Last Update:
    See Project
  • 7
    Cookiecutter Data Science

    Project structure for doing and sharing data science work

    A logical, reasonably standardized, but flexible project structure for doing and sharing data science work. When we think about data analysis, we often think just about the resulting reports, insights, or visualizations. While these end products are generally the main event, it's easy to focus on making the products look nice and ignore the quality of the code that generates them. Because these end products are created programmatically, code quality is still important! And we're not talking about bikeshedding over indentation aesthetics or pedantic formatting standards; ultimately, data science code quality is about correctness and reproducibility. It's no secret that good analyses are often the result of very scattershot and serendipitous explorations. Tentative experiments and rapidly testing approaches that might not work out are all part of the process of getting to the good stuff, and there is no magic bullet to turn data exploration into a simple, linear progression.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 8

    DEPRECATED - KVFinder

    Cavity Detection PyMOL plugin

    The KVFinder software, originally published in 2014, is deprecated. We published more recent software: parKVFinder and pyKVFinder. [parKVFinder] A Linux/macOS version is available in this GitHub repository, https://github.com/LBC-LNBio/parKVFinder, while a Windows version is in this GitHub repository, https://github.com/LBC-LNBio/parKVFinder-win. Please read and cite the original paper ParKVFinder: A thread-level parallel approach in biomolecular cavity detection (10.1016/j.softx.2020.100606). [pyKVFinder] pyKVFinder is available in this Python Package Index (PyPI) repository, https://pypi.org/project/pyKVFinder and this GitHub repository, https://github.com/LBC-LNBio/pyKVFinder. Please read and cite the original paper pyKVFinder: an efficient and integrable Python package for biomolecular cavity detection and characterization in data science (10.1186/s12859-021-04519-4).
    Downloads: 0 This Week
    Last Update:
    See Project
  • 9
    Data Science Notes

    Curated collection of data science learning materials

    Data Science Notes is a large, curated collection of data science learning materials, with explanations, code snippets, and structured notes across the typical end-to-end workflow. It spans foundational math and statistics through data wrangling, visualization, machine learning, and practical project organization. The content emphasizes hands-on understanding by pairing narrative notes with runnable examples, making it useful for both self-study and classroom settings. Because it aggregates topics in one place, learners can move linearly or jump into specific areas as needed during projects. The notes also highlight common pitfalls and good practices, which helps beginners adopt professional habits early. It’s a living resource that many students consult when revising fundamentals or exploring adjacent tools in the ecosystem.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 10
    Deep Learning course

    Slides and Jupyter notebooks for the Deep Learning lectures

    Slides and Jupyter notebooks for the Deep Learning lectures taught as part of the Master Year 2 Data Science program at Institut Polytechnique de Paris (IP Paris). Note: press "P" to display the presenter's notes, which include some comments and additional references. The lectures are built and maintained by Olivier Grisel and Charles Ollion.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 11
    Deep Learning with PyTorch

    Latest techniques in deep learning and representation learning

    This course concerns the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. The prerequisites include DS-GA 1001 Intro to Data Science or a graduate-level machine learning course. To be able to follow the exercises, you are going to need a laptop with Miniconda (a minimal version of Anaconda) and several Python packages installed. The following instructions work as-is for Mac or Ubuntu Linux users; Windows users need to install and work in the Git BASH terminal. JupyterLab has a built-in selectable dark theme, so you only need to install something if you want to use the classic notebook interface.
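    For flavor, a minimal sketch (not course material) of the kind of convolutional net the lectures cover, runnable once PyTorch is installed in the Miniconda environment:

    ```python
    import torch
    import torch.nn as nn

    # A tiny convolutional network: conv -> ReLU -> pool -> linear classifier.
    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(8 * 14 * 14, 10),
    )

    x = torch.randn(4, 1, 28, 28)  # a batch of four 28x28 grayscale images
    logits = model(x)
    print(logits.shape)            # torch.Size([4, 10])
    ```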
    Downloads: 0 This Week
    Last Update:
    See Project
  • 12
    DeepLearningProject

    An in-depth machine learning tutorial

    This tutorial tries to do what most machine learning tutorials available online do not. It is not a 30-minute tutorial that teaches you how to "train your own neural network" or "learn deep learning in under 30 minutes". It is a full pipeline of the kind you would need to build if you actually work with machine learning, introducing you to all the parts and all the implementation decisions and details that have to be made. The dataset is not one of the standard sets like MNIST or CIFAR; you will make your very own dataset. Then you will go through a couple of conventional machine learning algorithms before finally getting to deep learning. In the fall of 2016, I was a Teaching Fellow (Harvard's version of a TA) for the graduate class "Advanced Topics in Data Science (CS209/109)" at Harvard University. I was in charge of designing the class project given to the students, and this tutorial has been built on top of the project I designed for the class.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 13
    FlexiList

    FlexiList is a Java data structure that combines the benefits of arrays and linked lists

    FlexiList is a Java data structure that combines the benefits of arrays and linked lists. Like an array, it allows for efficient access to elements by index. Like a linked list, it allows for efficient insertion and deletion of elements at any position in the list. Benefits over arrays and ArrayList: efficient insertion and deletion (FlexiList can insert or delete a node at any position in O(1) time once the position is reached, whereas arrays and ArrayList require shifting all elements after the insertion or deletion point); dynamic size (FlexiList can grow or shrink as elements are added or removed, whereas arrays have a fixed size); and good memory locality (FlexiList nodes are stored in contiguous blocks, making it more cache-friendly than a conventional linked list).
    Downloads: 0 This Week
    Last Update:
    See Project
  • 14
    Forecasting Best Practices

    Time Series Forecasting Best Practices & Examples

    Time series forecasting is one of the most important topics in data science. Almost every business needs to predict the future in order to make better decisions and allocate resources more effectively. This repository provides examples and best practice guidelines for building forecasting solutions. The goal of this repository is to build a comprehensive set of tools and examples that leverage recent advances in forecasting algorithms to build solutions and operationalize them. Rather than creating implementations from scratch, we draw from existing state-of-the-art libraries and build additional utilities around processing and featurizing the data, optimizing and evaluating models, and scaling up to the cloud. The examples and best practices are provided as Python Jupyter notebooks and R markdown files, together with a library of utility functions.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    MCPower

    MCPower — simple Monte Carlo power analysis for complex models

    MCPower-GUI is a desktop application that provides a graphical interface for the MCPower Monte Carlo power analysis library. It guides users through the full workflow across three tabs: Model setup (formula input with live parsing, CSV data upload with auto-detected variable types, effect size sliders, and correlation editing), Analysis configuration (find power for a given sample size or find the minimum sample size for a target power, with multiple testing correction and scenario analysis), and Results (interactive charts, exportable tables, and auto-generated Python replication scripts). Supports both standard linear models and mixed-effects models. Additional features include analysis history, configurable scenarios, and built-in documentation.
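    MCPower's own API is not shown here; as a generic illustration of the Monte Carlo power technique the tool implements, this hedged sketch estimates power for a two-sample t-test by simulation (effect size, group size, alpha, and simulation count are arbitrary choices):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n, effect_size, alpha, n_sims = 50, 0.5, 0.05, 5000

    hits = 0
    for _ in range(n_sims):
        # Draw two groups whose means differ by `effect_size` standard deviations.
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(effect_size, 1.0, n)
        _, p = stats.ttest_ind(a, b)
        hits += p < alpha

    # Power = fraction of simulated experiments that reject the null.
    print(f"Estimated power: {hits / n_sims:.2f}")  # ~0.70 for these settings
    ```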
    Downloads: 0 This Week
    Last Update:
    See Project
  • 16
    ML workspace

    All-in-one web-based IDE specialized for machine learning

    All-in-one web-based development environment for machine learning. The ML workspace is an all-in-one web-based IDE specialized for machine learning and data science. It is simple to deploy and gets you started within minutes to productively build ML solutions on your own machines. This workspace is the ultimate tool for developers, preloaded with a variety of popular data science libraries (e.g., TensorFlow, PyTorch, Keras, scikit-learn) and dev tools (e.g., Jupyter, VS Code, TensorBoard) perfectly configured, optimized, and integrated. It is usable as a remote kernel (Jupyter) or remote machine (VS Code) via SSH, and easy to deploy on Mac, Linux, and Windows via Docker, with Jupyter, JupyterLab, and Visual Studio Code web-based IDEs included. By default, the workspace container has no resource constraints and can use as much of a given resource as the host’s kernel scheduler allows.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 17
    NuzeBot

    Finds interesting news headlines.

    This is a bot that finds the news you want to see. It can be configured to find the news that interests you and reject everything else. View the most interesting headlines from many websites on one page.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18

    OGLDataScienceTool

    Opengl tool for data science visualization

    A data visualization tool written in LWJGL, compatible with libgdx and other OpenGL wrappers. The project depends on Apache POI and Apache Commons for office file support. Planned features for the next release: reading JSON and other NoSQL data structures; JDBC connections for creating dataframes; data heatmaps and additional plots. For questions, contact kumar.santhi1982@hotmail.com. More details: http://www.java-gaming.org/topics/ds/41920/view.html and http://datascienceforindia.com/
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    Orchest

    Build data pipelines, the easy way

    Code, run and monitor your data pipelines all from your browser! From idea to scheduled pipeline in hours, not days. Interactively build your data science pipelines in our visual pipeline editor. Versioned as a JSON file. Run scripts or Jupyter notebooks as steps in a pipeline. Python, R, Julia, JavaScript, and Bash are supported. Parameterize your pipelines and run them periodically on a cron schedule. Easily install language or system packages. Built on top of regular Docker container images. Creation of multiple instances with up to 8 vCPU & 32 GiB memory. A free Orchest instance with 2 vCPU & 8 GiB memory. Simple data pipelines with Orchest. Each step runs a file in a container. It's that simple! Spin up services whose lifetime spans across the entire pipeline run. Easily define your dependencies to run on any machine. Run any subset of the pipeline directly or periodically.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 20
    PySyft

    Data science on data without acquiring a copy

    Most software libraries let you compute over the information you own and see inside of machines you control. However, this means that you cannot compute on information without first obtaining (at least partial) ownership of that information. It also means that you cannot compute using machines without first obtaining control over those machines. This is very limiting to human collaboration and systematically drives the centralization of data, because you cannot work with a bunch of data without first putting it all in one (central) place. The Syft ecosystem seeks to change this system, allowing you to write software which can compute over information you do not own on machines you do not have (total) control over. This not only includes servers in the cloud, but also personal desktops, laptops, mobile phones, websites, and edge devices. Wherever your data wants to live in your ownership, the Syft ecosystem exists to help keep it there while allowing it to be used privately.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 21
    Recommenders

    Best practices on recommendation systems

    The Recommenders repository provides examples and best practices for building recommendation systems, provided as Jupyter notebooks. The module reco_utils contains functions to simplify common tasks used when developing and evaluating recommender systems. Several utilities are provided in reco_utils to support common tasks such as loading datasets in the format expected by different algorithms, evaluating model outputs, and splitting training/test data. Implementations of several state-of-the-art algorithms are included for self-study and customization in your own applications. Please see the setup guide for more details on setting up your machine locally, on a data science virtual machine (DSVM) or on Azure Databricks. Independent or incubating algorithms and utilities are candidates for the contrib folder. This will house contributions which may not easily fit into the core repository or need time to refactor or mature the code and add necessary tests.
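    A small, hedged sketch of the utilities described (in recent releases the package is published as recommenders rather than reco_utils; the dataset size and split ratio below are arbitrary):

    ```python
    from recommenders.datasets import movielens
    from recommenders.datasets.python_splitters import python_random_split

    # Load the MovieLens 100k ratings as a pandas DataFrame.
    df = movielens.load_pandas_df(size="100k")

    # Split into train/test sets for fitting and evaluating a recommender.
    train, test = python_random_split(df, ratio=0.75)
    print(len(train), len(test))
    ```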
    Downloads: 0 This Week
    Last Update:
    See Project
  • 22
    SageMaker Containers

    Create SageMaker-compatible Docker containers

    Amazon SageMaker is a fully managed service for data science and machine learning (ML) workflows. You can use Amazon SageMaker to simplify the process of building, training, and deploying ML models. To train a model, you can include your training script and dependencies in a Docker container that runs your training code. A container provides an effectively isolated environment, ensuring a consistent runtime and reliable training process. The SageMaker Training Toolkit can be easily added to any Docker container, making it compatible with SageMaker for training models. If you use a prebuilt SageMaker Docker image for training, this library may already be included. Very often, an entry point needs additional information from the container that is not available in hyperparameters. SageMaker Containers writes this information as environment variables that are available inside the script.
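    For illustration, a training script can read that injected information directly; a minimal sketch using the documented SM_* environment variables:

    ```python
    import json
    import os

    # Locations and settings SageMaker injects into the training container.
    model_dir = os.environ["SM_MODEL_DIR"]          # where to save model artifacts
    train_dir = os.environ["SM_CHANNEL_TRAINING"]   # input data for the "training" channel
    hyperparams = json.loads(os.environ["SM_HPS"])  # hyperparameters as a JSON object

    print(f"Training on {train_dir}, saving to {model_dir}, with {hyperparams}")
    ```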
    Downloads: 0 This Week
    Last Update:
    See Project
  • 23
    SageMaker Inference Toolkit

    Serve machine learning models within a Docker container

    Serve machine learning models within a Docker container using Amazon SageMaker. Amazon SageMaker is a fully managed service for data science and machine learning (ML) workflows. You can use Amazon SageMaker to simplify the process of building, training, and deploying ML models. Once you have a trained model, you can include it in a Docker container that runs your inference code. A container provides an effectively isolated environment, ensuring a consistent runtime regardless of where the container is deployed. Containerizing your model and code enables fast and reliable deployment of your model. The SageMaker Inference Toolkit implements a model serving stack and can be easily added to any Docker container, making it deployable to SageMaker. This library's serving stack is built on Multi Model Server, and it can serve your own models or those you trained on SageMaker using machine learning frameworks with native SageMaker support.
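    The serving stack dispatches to handler functions in your inference script; a hedged sketch using the conventional SageMaker handler names (the pickle-based model loading is purely illustrative):

    ```python
    import json
    import os
    import pickle

    def model_fn(model_dir):
        # Load the trained model artifact from the directory SageMaker provides.
        with open(os.path.join(model_dir, "model.pkl"), "rb") as f:
            return pickle.load(f)

    def input_fn(request_body, content_type):
        # Deserialize the incoming request into model features.
        return json.loads(request_body)

    def predict_fn(data, model):
        # Run inference with the loaded model.
        return model.predict(data)

    def output_fn(prediction, accept):
        # Serialize predictions for the response.
        return json.dumps(list(prediction))
    ```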
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24
    SageMaker Training Toolkit

    Train machine learning models within Docker containers

    Train machine learning models within a Docker container using Amazon SageMaker. Amazon SageMaker is a fully managed service for data science and machine learning (ML) workflows. You can use Amazon SageMaker to simplify the process of building, training, and deploying ML models. To train a model, you can include your training script and dependencies in a Docker container that runs your training code. A container provides an effectively isolated environment, ensuring a consistent runtime and reliable training process. The SageMaker Training Toolkit can be easily added to any Docker container, making it compatible with SageMaker for training models. If you use a prebuilt SageMaker Docker image for training, this library may already be included. Write a training script (e.g., train.py), then define a container with a Dockerfile that includes the training script and any dependencies.
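    A minimal sketch of such a train.py, assuming hyperparameters arrive as command-line arguments (as the toolkit passes them) and the output location via the SM_MODEL_DIR environment variable:

    ```python
    import argparse
    import os

    if __name__ == "__main__":
        parser = argparse.ArgumentParser()
        # The training toolkit passes hyperparameters as CLI arguments.
        parser.add_argument("--epochs", type=int, default=10)
        parser.add_argument("--learning-rate", type=float, default=0.01)
        args = parser.parse_args()

        # SageMaker injects the output location for model artifacts.
        model_dir = os.environ.get("SM_MODEL_DIR", "/opt/ml/model")

        print(f"Training for {args.epochs} epochs at lr={args.learning_rate}")
        # ...training loop goes here; write artifacts under model_dir...
    ```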
    Downloads: 0 This Week
    Last Update:
    See Project