VMP 1.10.2020
PyTorch, No Tears
Python
Intermediate Python
Python Numpy Tutorial
Библиотека программиста: a selection of books on machine learning
PyTorch tutorial
TheAlgorithms/Python
pytorch.org
Pytorch.org tutorials
Tutorials beginner deep_learning_60min_blitz
Pytorch
Pytorch examples
Pytorch tutorials
github DeepFaceLab
yunjey pytorch-tutorial
deeplearningzerotoall PyTorch
zergtant pytorch-handbook
chenyuntc pytorch-book
pytorch_geometric
PyTorch-GAN
MorvanZhou PyTorch-Tutorial
DEEP LEARNING
The book “Programming with PyTorch: Building Deep Learning Applications”
How to make PyTorch and C++ work together: using TorchScript
TorchScriptTutorial
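A minimal sketch of the TorchScript workflow those links describe, assuming a toy module: script it, save it, and load it back (the same .pt file can then be loaded from C++ with torch::jit::load):

import torch
import torch.nn as nn

class Scale(nn.Module):
    def __init__(self, factor: float):
        super().__init__()
        self.factor = factor

    def forward(self, x):
        return x * self.factor

scripted = torch.jit.script(Scale(2.0))  # compile the module to TorchScript
scripted.save("scale.pt")                # serialize to a standalone file

loaded = torch.jit.load("scale.pt")      # load back in Python (or via C++)
print(loaded(torch.ones(3)))             # tensor([2., 2., 2.])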
Deep_Learning_for_Vision_Systems_by_Mohamed_Elgendy_z_lib_org.pdf
The uWSGI project
Introduction to WSGI servers: Part One
Flask Documentation (1.1.x)
Setting up mod_wsgi (Apache) for Flask
WSGI Servers Full Stack Python
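A minimal sketch of the serving setup these WSGI/Flask links build toward: a PyTorch model behind a Flask route. The model here is a placeholder, not a trained one; in production the app would be served by uWSGI or mod_wsgi (see the links above) rather than Flask's built-in server.

import torch
import torch.nn as nn
from flask import Flask, jsonify, request

app = Flask(__name__)
model = nn.Linear(4, 2)  # placeholder; a real app would load trained weights
model.eval()

@app.route("/predict", methods=["POST"])
def predict():
    values = request.get_json()["input"]  # e.g. {"input": [0.1, 0.2, 0.3, 0.4]}
    with torch.no_grad():
        out = model(torch.tensor(values, dtype=torch.float32))
    return jsonify({"output": out.tolist()})

if __name__ == "__main__":
    app.run()  # dev server only; use a WSGI server in production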
Neural Networks and Computer Vision – lesson 66: Building the first neural network
Neural Networks and Computer Vision – lesson 75: Classification in PyTorch
PyTorch-YOLOv3
YOLOv3-in-PyTorch
yolov3
PyTorch documentation
torch.nn
Convolution Layers
Conv2d
A Beginner’s Guide To Understanding Convolutional Neural Networks Part 1
A Beginner’s Guide To Understanding Convolutional Neural Networks Part 2
CS231n: Convolutional Neural Networks for Visual Recognition
Understanding of Convolutional Neural Network (CNN) — Deep Learning
Padding and Stride
Neural Networks and Computer Vision – lesson 1
Neural Networks and Computer Vision – lesson 103: Convolution, a cascade of convolutions
Neural Networks and Computer Vision – lesson 104: Convolution, a cascade of convolutions
Neural Networks and Computer Vision – lesson 105: Convolution, a cascade of convolutions
Neural Networks and Computer Vision – lesson 110: The LeNet architecture (1998)
Neural Networks and Computer Vision – lessons 116–124: AlexNet (2012) and VGG (2014)
Neural Networks and Computer Vision – lessons 125–130: GoogLeNet and ResNet (2015)
Neural Networks and Computer Vision – lesson 131: Recognizing handwritten digits with a convolutional network
torch.nn.Conv2d(in_channels: int, out_channels: int, kernel_size: Union[T, Tuple[T, T]], stride: Union[T, Tuple[T, T]] = 1, padding: Union[T, Tuple[T, T]] = 0, dilation: Union[T, Tuple[T, T]] = 1, groups: int = 1, bias: bool = True, padding_mode: str = 'zeros')
Parameters
in_channels (int) – Number of channels in the input image
out_channels (int) – Number of channels produced by the convolution
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – Zero-padding added to both sides of the input. Default: 0
padding_mode (string, optional) – 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
Pytorch-how-and-when-to-use-Module-Sequential-ModuleList-and-ModuleDict
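As a minimal sketch of what that article covers (the layer choices here are arbitrary examples): nn.Sequential is a fixed pipeline called as a whole, nn.ModuleList registers submodules but leaves the iteration to you, and nn.ModuleDict addresses registered submodules by name.

import torch
import torch.nn as nn

seq = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())  # called as one unit
layers = nn.ModuleList([nn.Linear(8, 8) for _ in range(3)])     # you write the loop
heads = nn.ModuleDict({"cls": nn.Linear(8, 10), "reg": nn.Linear(8, 1)})

x = torch.randn(2, 8)
for layer in layers:              # ModuleList requires explicit iteration
    x = layer(x)
print(heads["cls"](x).shape)      # torch.Size([2, 10])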
>>> # With square kernels and equal stride
>>> m = nn.Conv2d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
>>> # non-square kernels and unequal stride and with padding and dilation
>>> m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))
>>> input = torch.randn(20, 16, 50, 100)
>>> output = m(input)
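As a quick sanity check, the last example above can be verified by hand against the output-shape formula from the Conv2d documentation, H_out = floor((H_in + 2*padding - dilation*(kernel_size - 1) - 1) / stride + 1); this is a sketch, with the helper function name being our own:

import math

def conv2d_out(size, kernel, stride=1, padding=0, dilation=1):
    # Output-size formula from the torch.nn.Conv2d documentation
    return math.floor((size + 2 * padding - dilation * (kernel - 1) - 1) / stride + 1)

# Input 50x100, kernel (3, 5), stride (2, 1), padding (4, 2), dilation (3, 1):
print(conv2d_out(50, 3, stride=2, padding=4, dilation=3))   # 26
print(conv2d_out(100, 5, stride=1, padding=2, dilation=1))  # 100
# so output.shape == torch.Size([20, 33, 26, 100])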
import torch
import torch.nn as nn
import torch.nn.functional as F

class MyCNNClassifier(nn.Module):
    def __init__(self, in_c, n_classes):
        super().__init__()
        self.conv1 = nn.Conv2d(in_c, 32, kernel_size=3, stride=1, padding=1)
        self.bn1 = nn.BatchNorm2d(32)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1)
        self.bn2 = nn.BatchNorm2d(64)
        self.fc1 = nn.Linear(64 * 28 * 28, 1024)
        self.fc2 = nn.Linear(1024, n_classes)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = F.relu(x)
        x = self.conv2(x)
        x = self.bn2(x)
        x = F.relu(x)
        x = x.view(x.size(0), -1)  # flatten to (batch, 64 * 28 * 28)
        x = self.fc1(x)
        x = torch.sigmoid(x)  # F.sigmoid is deprecated; torch.sigmoid is equivalent
        x = self.fc2(x)
        return x

model = MyCNNClassifier(1, 10)
print(model)
MyCNNClassifier(
  (conv1): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (bn1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (conv2): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (fc1): Linear(in_features=50176, out_features=1024, bias=True)
  (fc2): Linear(in_features=1024, out_features=10, bias=True)
)
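Both conv layers use kernel_size=3, stride=1, padding=1, which preserves spatial size, so fc1's in_features of 64 * 28 * 28 assumes 28x28 inputs (e.g. MNIST). A quick sanity check of the forward pass, continuing from the code above:

x = torch.randn(8, 1, 28, 28)   # batch of 8 single-channel 28x28 images
logits = model(x)
print(logits.shape)             # torch.Size([8, 10])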
PyTorch-VAE
implementing-an-autoencoder-in-pytorch
Building Autoencoder in Pytorch
L1aoXingyu pytorch-beginner
kaggle.com autoencoders-with-pytorch
pytorch-beginner 08-AutoEncoder
kaggle Convolutional Autoencoder
Denoising-Autoencoder-in-Pytorch
github.com Autoencoders+pytorch
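A minimal fully-connected autoencoder sketch in the spirit of the links above (the 784-dimensional input assumes flattened 28x28 MNIST images; the layer widths are arbitrary):

import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))
        self.decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
x = torch.randn(16, 784)            # stand-in for a batch of flattened images
loss = nn.MSELoss()(model(x), x)    # reconstruction loss
loss.backward()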
pytorch mobile flutter
torch_mobile flutter plugin
GitHub PyTorch
Pytorch3d
PyTorch GitHub
PyTorch Geometric
PyTorch tutorial: from installation to a finished neural network
NVIDIA released a wrapper over PyTorch for model training
The most common mistakes made when training neural networks
StyleGAN2: an improved neural network for generating human faces
StyleGAN2 — Official TensorFlow Implementation
PyTorch: A Quick Guide (2019)
Understanding PyTorch through an example
PyTorch Tutorial: How to Develop Deep Learning Models with Python
torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')
Parameters
• in_channels (int) – Number of channels in the input image
• out_channels (int) – Number of channels produced by the convolution
• kernel_size (int or tuple) – Size of the convolving kernel
• stride (int or tuple, optional) – Stride of the convolution. (Default: 1)
• padding (int or tuple, optional) – Zero-padding added to both sides of the input (Default: 0)
• padding_mode (string, optional) – 'zeros', 'reflect', 'replicate' or 'circular'. (Default: 'zeros')
• dilation (int or tuple, optional) – Spacing between kernel elements. (Default: 1)
• groups (int, optional) – Number of blocked connections from input to output channels. (Default: 1)
• bias (bool, optional) – If True, adds a learnable bias to the output. (Default: True)
And this URL has a helpful visualization of the process.
So in_channels for the first layer is 3 for color images (3 channels); for black-and-white images it should be 1, and some satellite images have 4 channels.
The out_channels is the number of feature maps the convolution produces, i.e. the number of filters.
Let’s create an example to “prove” that.
import torch
import torch.nn as nn
c = nn.Conv2d(1, 3, stride=1, kernel_size=(4, 5))
print(c.weight.shape)
print(c.weight)
Out
torch.Size([3, 1, 4, 5])
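As a side note (following the Conv2d docs rather than the snippet above), the weight shape is [out_channels, in_channels / groups, kernel_height, kernel_width], so with groups > 1 the second dimension shrinks. A quick check of that assumption:

g = nn.Conv2d(4, 8, kernel_size=3, groups=2)
print(g.weight.shape)  # torch.Size([8, 2, 3, 3]): in_channels / groups = 2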
Neural Networks and Computer Vision – lesson 1
PyTorch at Tesla – Andrej Karpathy, Tesla
Pytorch Bidirectional LSTM example
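A minimal bidirectional LSTM sketch along the lines of that example (the dimensions are arbitrary); the output feature size doubles because the forward and backward directions are concatenated:

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=1,
               batch_first=True, bidirectional=True)
x = torch.randn(4, 7, 10)   # (batch, seq_len, features)
out, (h, c) = lstm(x)
print(out.shape)            # torch.Size([4, 7, 40]): 2 * hidden_size
print(h.shape)              # torch.Size([2, 4, 20]): (num_directions, batch, hidden)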
PyTorch's popularity grew by an average of 243% over the year
Pytorch & related libraries
- pytorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration.
- pytorch text: Torch text related contents.
- pytorch-seq2seq: A framework for sequence-to-sequence (seq2seq) models implemented in PyTorch.
- anuvada: Interpretable Models for NLP using PyTorch.
- audio: simple audio I/O for pytorch.
- loop: A method to generate speech across multiple speakers
- fairseq-py: Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
- speech: PyTorch ASR Implementation.
- OpenNMT-py: Open-Source Neural Machine Translation in PyTorch http://opennmt.net
- neuralcoref: State-of-the-art coreference resolution based on neural nets and spaCy huggingface.co/coref
- sentiment-discovery: Unsupervised Language Modeling at scale for robust sentiment classification.
- MUSE: A library for Multilingual Unsupervised or Supervised word Embeddings
- nmtpytorch: Neural Machine Translation Framework in PyTorch.
- pytorch-wavenet: An implementation of WaveNet with fast generation
- Tacotron-pytorch: Tacotron: Towards End-to-End Speech Synthesis.
- AllenNLP: An open-source NLP research library, built on PyTorch.
- PyTorch-NLP: Text utilities and datasets for PyTorch pytorchnlp.readthedocs.io
- quick-nlp: Pytorch NLP library based on FastAI.
- TTS: Deep learning for Text2Speech
- LASER: Language-Agnostic SEntence Representations
- pyannote-audio: Neural building blocks for speaker diarization: speech activity detection, speaker change detection, speaker embedding
- gensen: Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning.
- translate: Translate – a PyTorch Language Library.
- espnet: End-to-End Speech Processing Toolkit espnet.github.io/espnet
- pythia: A software suite for Visual Question Answering
- UnsupervisedMT: Phrase-Based & Neural Unsupervised Machine Translation.
- jiant: The jiant sentence representation learning toolkit.
- BERT-PyTorch: Pytorch implementation of Google AI’s 2018 BERT, with simple annotation
- InferSent: Sentence embeddings (InferSent) and training code for NLI.
- uis-rnn: This is the library for the Unbounded Interleaved-State Recurrent Neural Network (UIS-RNN) algorithm, corresponding to the paper Fully Supervised Speaker Diarization. arxiv.org/abs/1810.04719
- flair: A very simple framework for state-of-the-art Natural Language Processing (NLP)
- pytext: A natural language modeling framework based on PyTorch fb.me/pytextdocs
- voicefilter: Unofficial PyTorch implementation of Google AI’s VoiceFilter system http://swpark.me/voicefilter
- BERT-NER: Pytorch-Named-Entity-Recognition-with-BERT.
- transfer-nlp: NLP library designed for flexible research and development
- texar-pytorch: Toolkit for Machine Learning and Text Generation, in PyTorch texar.io
- pytorch-kaldi: pytorch-kaldi is a project for developing state-of-the-art DNN/RNN hybrid speech recognition systems. The DNN part is managed by pytorch, while feature extraction, label computation, and decoding are performed with the kaldi toolkit.
- NeMo: Neural Modules: a toolkit for conversational AI nvidia.github.io/NeMo
- pytorch-struct: A library of vectorized implementations of core structured prediction algorithms (HMM, Dep Trees, CKY, ...)
- espresso: Espresso: A Fast End-to-End Neural Speech Recognition Toolkit
- transformers: huggingface Transformers: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch. huggingface.co/transformers
- reformer-pytorch: Reformer, the efficient Transformer, in Pytorch
CV:
- pytorch vision: Datasets, Transforms and Models specific to Computer Vision.
- pt-styletransfer: Neural style transfer as a class in PyTorch.
- OpenFacePytorch: PyTorch module to use OpenFace’s nn4.small2.v1.t7 model
- img_classification_pk_pytorch: Quickly comparing your image classification models with the state-of-the-art models (such as DenseNet, ResNet, …)
- SparseConvNet: Submanifold sparse convolutional networks.
- Convolution_LSTM_pytorch: A multi-layer convolution LSTM module
- face-alignment: 2D and 3D face alignment library built using PyTorch adrianbulat.com
- pytorch-semantic-segmentation: PyTorch for Semantic Segmentation.
- RoIAlign.pytorch: This is a PyTorch version of RoIAlign. This implementation is based on crop_and_resize and supports both forward and backward on CPU and GPU.
- pytorch-cnn-finetune: Fine-tune pretrained Convolutional Neural Networks with PyTorch.
- detectorch: Detectorch – detectron for PyTorch
- Augmentor: Image augmentation library in Python for machine learning. http://augmentor.readthedocs.io
- s2cnn: This library contains a PyTorch implementation of the SO(3) equivariant CNNs for spherical signals (e.g. omnidirectional cameras, signals on the globe)
- TorchCV: A PyTorch-Based Framework for Deep Learning in Computer Vision.
- maskrcnn-benchmark: Fast, modular reference implementation of Instance Segmentation and Object Detection algorithms in PyTorch.
- image-classification-mobile: Collection of classification models pretrained on the ImageNet-1K.
- medicaltorch: A medical imaging framework for Pytorch http://medicaltorch.readthedocs.io
- albumentations: Fast image augmentation library.
- kornia: Differentiable computer vision library.
- pytorch-text-recognition: Text recognition combo – CRAFT + CRNN.
- facenet-pytorch: Pretrained Pytorch face detection and recognition models ported from davidsandberg/facenet.
- detectron2: Detectron2 is FAIR’s next-generation research platform for object detection and segmentation.
- vedaseg: A semantic segmentation framework in PyTorch
- ClassyVision: An end-to-end PyTorch framework for image and video classification.
- detecto: Computer vision in Python with less than 10 lines of code
- pytorch3d: PyTorch3D is FAIR’s library of reusable components for deep learning with 3D data pytorch3d.org
- MMDetection: MMDetection is an open source object detection toolbox, a part of the OpenMMLab project.
- neural-dream: A PyTorch implementation of the DeepDream algorithm. Creates dream-like hallucinogenic visuals.
- FlashTorch: Visualization toolkit for neural networks in PyTorch!
- Lucent: Tensorflow and OpenAI Clarity’s Lucid adapted for PyTorch.
- MMDetection3D: MMDetection3D is OpenMMLab’s next-generation platform for general 3D object detection, a part of the OpenMMLab project.
- MMSegmentation: MMSegmentation is a semantic segmentation toolbox and benchmark, a part of the OpenMMLab project.
- MMEditing: MMEditing is a image and video editing toolbox, a part of the OpenMMLab project.
- MMAction2: MMAction2 is OpenMMLab’s next generation action understanding toolbox and benchmark, a part of the OpenMMLab project.
- MMPose: MMPose is a pose estimation toolbox and benchmark, a part of the OpenMMLab project.
Probabilistic/Generative Libraries:
- ptstat: Probabilistic Programming and Statistical Inference in PyTorch
- pyro: Deep universal probabilistic programming with Python and PyTorch http://pyro.ai
- probtorch: Probabilistic Torch is library for deep generative models that extends PyTorch.
- paysage: Unsupervised learning and generative models in python/pytorch.
- pyvarinf: Python package facilitating the use of Bayesian Deep Learning methods with Variational Inference for PyTorch.
- pyprob: A PyTorch-based library for probabilistic programming and inference compilation.
- mia: A library for running membership inference attacks against ML models.
- pro_gan_pytorch: ProGAN package implemented as an extension of PyTorch nn.Module.
- botorch: Bayesian optimization in PyTorch
Other libraries:
- pytorch extras: Some extra features for pytorch.
- functional zoo: PyTorch, unlike Lua Torch, has autograd at its core, so using the modular structure of torch.nn modules is not necessary; one can easily allocate the needed Variables and write a function that uses them, which is sometimes more convenient. This repo contains model definitions in this functional style, with pretrained weights for some models.
- torch-sampling: This package provides a set of transforms and data structures for sampling from in-memory or out-of-memory data.
- torchcraft-py: Python wrapper for TorchCraft, a bridge between Torch and StarCraft for AI research.
- aorun: Aorun intends to be a Keras with PyTorch as the backend.
- logger: A simple logger for experiments.
- PyTorch-docset: PyTorch docset! use with Dash, Zeal, Velocity, or LovelyDocs.
- convert_torch_to_pytorch: Convert torch t7 model to pytorch model and source.
- pretrained-models.pytorch: The goal of this repo is to help to reproduce research papers results.
- pytorch_fft: PyTorch wrapper for FFTs
- caffe_to_torch_to_pytorch
- pytorch-extension: This is a CUDA extension for PyTorch which computes the Hadamard product of two tensors.
- tensorboard-pytorch: This module saves PyTorch tensors in tensorboard format for inspection. Currently supports scalar, image, audio, histogram features in tensorboard.
- gpytorch: GPyTorch is a Gaussian Process library, implemented using PyTorch. It is designed for creating flexible and modular Gaussian Process models with ease, so that you don’t have to be an expert to use GPs.
- spotlight: Deep recommender models using PyTorch.
- pytorch-cns: Compressed Network Search with PyTorch
- pyinn: CuPy fused PyTorch neural networks ops
- inferno: A utility library around PyTorch
- pytorch-fitmodule: Super simple fit method for PyTorch modules
- inferno-sklearn: A scikit-learn compatible neural network library that wraps pytorch.
- pytorch-caffe-darknet-convert: convert between pytorch, caffe prototxt/weights and darknet cfg/weights
- pytorch2caffe: Convert PyTorch model to Caffemodel
- pytorch-tools: Tools for PyTorch
- sru: Training RNNs as Fast as CNNs (arxiv.org/abs/1709.02755)
- torch2coreml: Torch7 -> CoreML
- PyTorch-Encoding: PyTorch Deep Texture Encoding Network http://hangzh.com/PyTorch-Encoding
- pytorch-ctc: PyTorch-CTC is an implementation of CTC (Connectionist Temporal Classification) beam search decoding for PyTorch. C++ code borrowed liberally from TensorFlow with some improvements to increase flexibility.
- candlegp: Gaussian Processes in Pytorch.
- dpwa: Distributed Learning by Pair-Wise Averaging.
- dni-pytorch: Decoupled Neural Interfaces using Synthetic Gradients for PyTorch.
- skorch: A scikit-learn compatible neural network library that wraps pytorch
- ignite: Ignite is a high-level library to help with training neural networks in PyTorch.
- Arnold: Arnold – DOOM Agent
- pytorch-mcn: Convert models from MatConvNet to PyTorch
- simple-faster-rcnn-pytorch: A simplified implemention of Faster R-CNN with competitive performance.
- generative_zoo: generative_zoo is a repository that provides working implementations of some generative models in PyTorch.
- pytorchviz: A small package to create visualizations of PyTorch execution graphs.
- cogitare: Cogitare – A Modern, Fast, and Modular Deep Learning and Machine Learning framework in Python.
- pydlt: PyTorch based Deep Learning Toolbox
- semi-supervised-pytorch: Implementations of different VAE-based semi-supervised and generative models in PyTorch.
- pytorch_cluster: PyTorch Extension Library of Optimised Graph Cluster Algorithms.
- neural-assembly-compiler: A neural assembly compiler for pyTorch based on adaptive-neural-compilation.
- caffemodel2pytorch: Convert Caffe models to PyTorch.
- extension-cpp: C++ extensions in PyTorch
- pytoune: A Keras-like framework and utilities for PyTorch
- jetson-reinforcement: Deep reinforcement learning libraries for NVIDIA Jetson TX1/TX2 with PyTorch, OpenAI Gym, and Gazebo robotics simulator.
- matchbox: Write PyTorch code at the level of individual examples, then run it efficiently on minibatches.
- torch-two-sample: A PyTorch library for two-sample tests
- pytorch-summary: Model summary in PyTorch similar to model.summary() in Keras
- mpl.pytorch: Pytorch implementation of MaxPoolingLoss.
- scVI-dev: Development branch of the scVI project in PyTorch
- apex: An Experimental PyTorch Extension (will be deprecated at a later point)
- ELF: ELF: a platform for game research.
- Torchlite: A high-level library on top of (not only) Pytorch
- joint-vae: Pytorch implementation of JointVAE, a framework for disentangling continuous and discrete factors of variation.
- SLM-Lab: Modular Deep Reinforcement Learning framework in PyTorch.
- bindsnet: A Python package used for simulating spiking neural networks (SNNs) on CPUs or GPUs using PyTorch
- pytorch_geometric: Geometric Deep Learning Extension Library for PyTorch
- torchplus: Implements the + operator on PyTorch modules, returning sequences.
- lagom: lagom: A light PyTorch infrastructure to quickly prototype reinforcement learning algorithms.
- torchbearer: torchbearer: A model training library for researchers using PyTorch.
- pytorch-maml-rl: Reinforcement Learning with Model-Agnostic Meta-Learning in Pytorch.
- NALU: Basic pytorch implementation of NAC/NALU from the Neural Arithmetic Logic Units paper by Trask et al. arxiv.org/pdf/1808.00508.pdf
- QuCumber: Neural Network Many-Body Wavefunction Reconstruction
- magnet: Deep Learning Projects that Build Themselves http://magnet-dl.readthedocs.io/
- opencv_transforms: OpenCV implementation of Torchvision’s image augmentations
- fastai: The fast.ai deep learning library, lessons, and tutorials
- pytorch-dense-correspondence: Code for “Dense Object Nets: Learning Dense Visual Object Descriptors By and For Robotic Manipulation” arxiv.org/pdf/1806.08756.pdf
- colorization-pytorch: PyTorch reimplementation of Interactive Deep Colorization richzhang.github.io/ideepcolor
- beauty-net: A simple, flexible, and extensible template for PyTorch. It’s beautiful.
- OpenChem: OpenChem: Deep Learning toolkit for Computational Chemistry and Drug Design Research mariewelt.github.io/OpenChem
- torchani: Accurate Neural Network Potential on PyTorch aiqm.github.io/torchani
- PyTorch-LBFGS: A PyTorch implementation of L-BFGS.
- hessian: hessian in pytorch.
- vel: Velocity in deep-learning research.
- nonechucks: Skip bad items in your PyTorch DataLoader, use Transforms as Filters, and more!
- torchstat: Model analyzer in PyTorch.
- QNNPACK: Quantized Neural Network PACKage – mobile-optimized implementation of quantized neural network operators.
- torchdiffeq: Differentiable ODE solvers with full GPU support and O(1)-memory backpropagation.
- redner: A differentiable Monte Carlo path tracer
- pixyz: a library for developing deep generative models in a more concise, intuitive and extendable way.
- euclidesdb: A multi-model machine learning feature embedding database http://euclidesdb.readthedocs.io
- pytorch2keras: Convert PyTorch dynamic graph to Keras model.
- salad: Semi-Supervised Learning and Domain Adaptation.
- netharn: Parameterized fit and prediction harnesses for pytorch.
- dgl: Python package built to ease deep learning on graphs, on top of existing DL frameworks. http://dgl.ai.
- gandissect: Pytorch-based tools for visualizing and understanding the neurons of a GAN. gandissect.csail.mit.edu
- delira: Lightweight framework for fast prototyping and training deep neural networks in medical imaging delira.rtfd.io
- mushroom: Python library for Reinforcement Learning experiments.
- Xlearn: Transfer Learning Library
- geoopt: Riemannian Adaptive Optimization Methods with pytorch optim
- vegans: A library providing various existing GANs in PyTorch.
- torchgeometry: TGM: PyTorch Geometry
- AdverTorch: A Toolbox for Adversarial Robustness (attack/defense/training) Research
- AdaBound: An optimizer that trains as fast as Adam and as good as SGD.
- fenchel-young-losses: Probabilistic classification in PyTorch/TensorFlow/scikit-learn with Fenchel-Young losses
- pytorch-OpCounter: Count the FLOPs of your PyTorch model.
- Tor10: A generic tensor-network library designed for quantum simulation, based on PyTorch.
- Catalyst: High-level utils for PyTorch DL & RL research. Developed with a focus on reproducibility, fast experimentation, and code/idea reuse, so you can research and develop something new rather than write another regular train loop.
- Ax: Adaptive Experimentation Platform
- pywick: High-level batteries-included neural network training library for Pytorch
- torchgpipe: A GPipe implementation in PyTorch torchgpipe.readthedocs.io
- hub: Pytorch Hub is a pre-trained model repository designed to facilitate research reproducibility.
- pytorch-lightning: Rapid research framework for Pytorch. The researcher’s version of keras.
- tensorwatch: Debugging, monitoring and visualization for Deep Learning and Reinforcement Learning from Microsoft Research.
- wavetorch: Numerically solving and backpropagating through the wave equation arxiv.org/abs/1904.12831
- diffdist: diffdist is a python library for pytorch. It extends the default functionality of torch.autograd and adds support for differentiable communication between processes.
- torchprof: A minimal dependency library for layer-by-layer profiling of Pytorch models.
- osqpth: The differentiable OSQP solver layer for PyTorch.
- mctorch: A manifold optimization library for deep learning.
- pytorch-hessian-eigenthings: Efficient PyTorch Hessian eigendecomposition using the Hessian-vector product and stochastic power iteration.
- MinkowskiEngine: Minkowski Engine is an auto-diff library for generalized sparse convolutions and high-dimensional sparse tensors.
- pytorch-cpp-rl: PyTorch C++ Reinforcement Learning
- pytorch-toolbelt: PyTorch extensions for fast R&D prototyping and Kaggle farming
- argus-tensor-stream: A library for real-time video stream decoding to CUDA memory tensorstream.argus-ai.com
- macarico: learning to search in pytorch
- rlpyt: Reinforcement Learning in PyTorch
- pywarm: A cleaner way to build neural networks for PyTorch. blue-season.github.io/pywarm
- learn2learn: PyTorch Meta-learning Framework for Researchers http://learn2learn.net
- torchbeast: A PyTorch Platform for Distributed RL
- higher: higher is a pytorch library allowing users to obtain higher order gradients over losses spanning training loops rather than individual training steps.
- Torchelie: Torchélie is a set of utility functions, layers, losses, models, trainers and other things for PyTorch. torchelie.readthedocs.org
- CrypTen: CrypTen is a Privacy Preserving Machine Learning framework written using PyTorch that allows researchers and developers to train models using encrypted data. CrypTen currently supports Secure multi-party computation as its encryption mechanism.
- cvxpylayers: cvxpylayers is a Python library for constructing differentiable convex optimization layers in PyTorch
- RepDistiller: Contrastive Representation Distillation (CRD), and benchmark of recent knowledge distillation methods
- kaolin: PyTorch library aimed at accelerating 3D deep learning research
- PySNN: Efficient Spiking Neural Network framework, built on top of PyTorch for GPU acceleration.
- sparktorch: Train and run Pytorch models on Apache Spark.
- pytorch-metric-learning: The easiest way to use metric learning in your application. Modular, flexible, and extensible. Written in PyTorch.
- autonomous-learning-library: A PyTorch library for building deep reinforcement learning agents.
- flambe: An ML framework to accelerate research and its path to production. flambe.ai
- pytorch-optimizer: Collections of modern optimization algorithms for PyTorch, including AccSGD, AdaBound, AdaMod, DiffGrad, Lamb, RAdam, Yogi.
- PyTorch-VAE: A Collection of Variational Autoencoders (VAE) in PyTorch.
- ray: A fast and simple framework for building and running distributed applications. Ray is packaged with RLlib, a scalable reinforcement learning library, and Tune, a scalable hyperparameter tuning library. ray.io
- Pytorch Geometric Temporal: A temporal extension library for PyTorch Geometric
- Poutyne: A Keras-like framework for PyTorch that handles much of the boilerplating code needed to train neural networks.
- Pytorch-Toolbox: This is a toolbox project for Pytorch, aiming to help you write Pytorch code that is easier, more readable, and concise.
- Pytorch-contrib: It contains reviewed implementations of ideas from recent machine learning papers.
- EfficientNet PyTorch: It contains an op-for-op PyTorch reimplementation of EfficientNet, along with pre-trained models and examples.
- PyTorch/XLA: PyTorch/XLA is a Python package that uses the XLA deep learning compiler to connect the PyTorch deep learning framework and Cloud TPUs.
- webdataset: WebDataset is a PyTorch Dataset (IterableDataset) implementation providing efficient access to datasets stored in POSIX tar archives.
- volksdep: volksdep is an open-source toolbox for deploying and accelerating PyTorch, Onnx and Tensorflow models with TensorRT.
Tutorials, books, & examples
- Practical Pytorch: Tutorials explaining different RNN models
- DeepLearningForNLPInPytorch: An IPython Notebook tutorial on deep learning, with an emphasis on Natural Language Processing.
- pytorch-tutorial: tutorial for researchers to learn deep learning with pytorch.
- pytorch-exercises: pytorch-exercises collection.
- pytorch tutorials: Various pytorch tutorials.
- pytorch examples: A repository showcasing examples of using pytorch
- pytorch practice: Some example scripts on pytorch.
- pytorch mini tutorials: Minimal tutorials for PyTorch adapted from Alec Radford’s Theano tutorials.
- pytorch text classification: A simple implementation of CNN based text classification in Pytorch
- cats vs dogs: Example of network fine-tuning in pytorch for the kaggle competition Dogs vs. Cats Redux: Kernels Edition. Currently #27 (0.05074) on the leaderboard.
- convnet: This is a complete training example for Deep Convolutional Networks on various datasets (ImageNet, Cifar10, Cifar100, MNIST).
- pytorch-generative-adversarial-networks: simple generative adversarial network (GAN) using PyTorch.
- pytorch containers: This repository aims to help former Torchies more seamlessly transition to the “Containerless” world of PyTorch by providing a list of PyTorch implementations of Torch Table Layers.
- T-SNE in pytorch: t-SNE experiments in pytorch
- AAE_pytorch: Adversarial Autoencoders (with Pytorch).
- Kind_PyTorch_Tutorial: Kind PyTorch Tutorial for beginners.
- pytorch-poetry-gen: a char-RNN based on pytorch.
- pytorch-REINFORCE: PyTorch implementation of REINFORCE, This repo supports both continuous and discrete environments in OpenAI gym.
- PyTorch-Tutorial: Build your neural network easy and fast https://morvanzhou.github.io/tutorials/
- pytorch-intro: A couple of scripts to illustrate how to do CNNs and RNNs in PyTorch
- pytorch-classification: A unified framework for the image classification task on CIFAR-10/100 and ImageNet.
- pytorch_notebooks – hardmaru: Random tutorials created in NumPy and PyTorch.
- pytorch_tutoria-quick: Quick PyTorch introduction and tutorial. Targets computer vision, graphics and machine learning researchers eager to try a new framework.
- Pytorch_fine_tuning_Tutorial: A short tutorial on performing fine tuning or transfer learning in PyTorch.
- pytorch_exercises: pytorch-exercises
- traffic-sign-detection: nyu-cv-fall-2017 example
- mss_pytorch: Singing Voice Separation via Recurrent Inference and Skip-Filtering Connections – PyTorch Implementation. Demo: js-mim.github.io/mss_pytorch
- DeepNLP-models-Pytorch: Pytorch implementations of various Deep NLP models from cs-224n (Stanford Univ: NLP with Deep Learning)
- Mila introductory tutorials: Various tutorials given for welcoming new students at MILA.
- pytorch.rl.learning: for learning reinforcement learning using PyTorch.
- minimal-seq2seq: Minimal Seq2Seq model with Attention for Neural Machine Translation in PyTorch
- tensorly-notebooks: Tensor methods in Python with TensorLy tensorly.github.io/dev
- pytorch_bits: time-series prediction related examples.
- skip-thoughts: An implementation of Skip-Thought Vectors in PyTorch.
- video-caption-pytorch: pytorch code for video captioning.
- Capsule-Network-Tutorial: Pytorch easy-to-follow Capsule Network tutorial.
- code-of-learn-deep-learning-with-pytorch: This is code of book “Learn Deep Learning with PyTorch” item.jd.com/17915495606.html
- RL-Adventure: Pytorch easy-to-follow step-by-step Deep Q Learning tutorial with clean readable code.
- accelerated_dl_pytorch: Accelerated Deep Learning with PyTorch at Jupyter Day Atlanta II.
- RL-Adventure-2: PyTorch4 tutorial of: actor critic / proximal policy optimization / acer / ddpg / twin dueling ddpg / soft actor critic / generative adversarial imitation learning / hindsight experience replay
- Generative Adversarial Networks (GANs) in 50 lines of code (PyTorch)
- adversarial-autoencoders-with-pytorch
- transfer learning using pytorch
- how-to-implement-a-yolo-object-detector-in-pytorch
- pytorch-for-recommenders-101
- pytorch-for-numpy-users
- PyTorch Tutorial: PyTorch Tutorials in Chinese.
- grokking-pytorch: The Hitchhiker’s Guide to PyTorch
- PyTorch-Deep-Learning-Minicourse: Minicourse in Deep Learning with PyTorch.
- pytorch-custom-dataset-examples: Some custom dataset examples for PyTorch
- Multiplicative LSTM for sequence-based Recommenders
- deeplearning.ai-pytorch: PyTorch Implementations of Coursera’s Deep Learning(deeplearning.ai) Specialization.
- MNIST_Pytorch_python_and_capi: This is an example of how to train an MNIST network in Python and run it in C++ with PyTorch 1.0
- torch_light: Tutorials and examples include Reinforcement Training, NLP, CV
- portrain-gan: torch code to decode (and almost encode) latents from art-DCGAN’s Portrait GAN.
- mri-analysis-pytorch: MRI analysis using PyTorch and MedicalTorch
- cifar10-fast: Demonstration of training a small ResNet on CIFAR10 to 94% test accuracy in 79 seconds as described in this blog series.
- Intro to Deep Learning with PyTorch: A free course by Udacity and Facebook, with a good intro to PyTorch and an interview with Soumith Chintala, one of the original authors of PyTorch.
- pytorch-sentiment-analysis: Tutorials on getting started with PyTorch and TorchText for sentiment analysis.
- pytorch-image-models: PyTorch image models, scripts, pretrained weights — (SE)ResNet/ResNeXT, DPN, EfficientNet, MobileNet-V3/V2/V1, MNASNet, Single-Path NAS, FBNet, and more.
- CIFAR-ZOO: Pytorch implementation for multiple CNN architectures and improve methods with state-of-the-art results.
- d2l-pytorch: This is an attempt to modify Dive into Deep Learning, Berkeley STAT 157 (Spring 2019) textbook’s code into PyTorch.
- thinking-in-tensors-writing-in-pytorch: Thinking in tensors, writing in PyTorch (a hands-on deep learning intro).
- NER-BERT-pytorch: PyTorch solution of named entity recognition task Using Google AI’s pre-trained BERT model.
- pytorch-sync-batchnorm-example: How to use Cross Replica / Synchronized Batchnorm in Pytorch.
- SentimentAnalysis: Sentiment analysis neural network trained by fine tuning BERT on the Stanford Sentiment Treebank, thanks to Hugging Face’s Transformers library.
- pytorch-cpp: C++ implementations of PyTorch tutorials for deep learning researchers (based on the Python tutorials from pytorch-tutorial).
- Deep Learning with PyTorch: Zero to GANs: Interactive and coding-focused tutorial series on introduction to Deep Learning with PyTorch (video).
- Deep Learning with PyTorch: Deep Learning with PyTorch teaches you how to implement deep learning algorithms with Python and PyTorch. The book includes a case study: building an algorithm capable of detecting malignant lung tumors using CT scans.
- Serverless Machine Learning in Action with PyTorch and AWS: Serverless Machine Learning in Action is a guide to bringing your experimental PyTorch machine learning code to production using serverless capabilities from major cloud providers like AWS, Azure, or GCP.
Paper implementations
- google_evolution: This implements one of the resulting networks from Large-Scale Evolution of Image Classifiers by Esteban Real et al.
- pyscatwave: Fast Scattering Transform with CuPy/PyTorch, read the paper here
- scalingscattering: Scaling The Scattering Transform : Deep Hybrid Networks.
- deep-auto-punctuation: a pytorch implementation of auto-punctuation learned character by character.
- Realtime_Multi-Person_Pose_Estimation: This is a pytorch version of Realtime_Multi-Person_Pose_Estimation; the original code is here.
- PyTorch-value-iteration-networks: PyTorch implementation of the Value Iteration Networks (NIPS ’16) paper
- pytorch_Highway: Highway network implemented in pytorch.
- pytorch_NEG_loss: NEG loss implemented in pytorch.
- pytorch_RVAE: Recurrent Variational Autoencoder that generates sequential data implemented in pytorch.
- pytorch_TDNN: Time Delayed NN implemented in pytorch.
- eve.pytorch: An implementation of the Eve optimizer, proposed in Improving Stochastic Gradient Descent with Feedback (Koushik and Hayashi, 2016).
- e2e-model-learning: Task-based end-to-end model learning.
- pix2pix-pytorch: PyTorch implementation of “Image-to-Image Translation Using Conditional Adversarial Networks”.
- Single Shot MultiBox Detector: A PyTorch Implementation of Single Shot MultiBox Detector.
- DiscoGAN: PyTorch implementation of “Learning to Discover Cross-Domain Relations with Generative Adversarial Networks”
- official DiscoGAN implementation: Official implementation of “Learning to Discover Cross-Domain Relations with Generative Adversarial Networks”.
- pytorch-es: This is a PyTorch implementation of Evolution Strategies.
- piwise: Pixel-wise segmentation on VOC2012 dataset using pytorch.
- pytorch-dqn: Deep Q-Learning Network in pytorch.
- neuraltalk2-pytorch: Image captioning model in PyTorch (finetunable CNN in branch with_finetune)
- vnet.pytorch: A Pytorch implementation for V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation.
- pytorch-fcn: PyTorch implementation of Fully Convolutional Networks.
- WideResNets: WideResNets for CIFAR10/100 implemented in PyTorch. This implementation requires less GPU memory than what is required by the official Torch implementation: https://github.com/szagoruyko/wide-residual-networks.
- pytorch_highway_networks: Highway networks implemented in PyTorch.
- pytorch-NeuCom: Pytorch implementation of DeepMind’s differentiable neural computer paper.
- captionGen: Generate captions for an image using PyTorch.
- AnimeGAN: A simple PyTorch Implementation of Generative Adversarial Networks, focusing on anime face drawing.
- Cnn-text classification: This is the implementation of Kim’s Convolutional Neural Networks for Sentence Classification paper in PyTorch.
- deepspeech2: Implementation of DeepSpeech2 using Baidu Warp-CTC. Creates a network based on the DeepSpeech2 architecture, trained with the CTC activation function.
- seq2seq: This repository contains implementations of Sequence to Sequence (Seq2Seq) models in PyTorch
- Asynchronous Advantage Actor-Critic in PyTorch: This is a PyTorch implementation of A3C as described in Asynchronous Methods for Deep Reinforcement Learning. Since PyTorch has an easy way to control shared memory across processes, we can easily implement asynchronous methods like A3C.
- densenet: This is a PyTorch implementation of the DenseNet-BC architecture as described in the paper Densely Connected Convolutional Networks by G. Huang, Z. Liu, K. Weinberger, and L. van der Maaten. This implementation gets a CIFAR-10+ error rate of 4.77 with a 100-layer DenseNet-BC with a growth rate of 12. Their official implementation and links to many other third-party implementations are available in the liuzhuang13/DenseNet repo on GitHub.
- nninit: Weight initialization schemes for PyTorch nn.Modules. This is a port of the popular nninit for Torch7 by @kaixhin.
- faster rcnn: This is a PyTorch implementation of Faster RCNN. This project is mainly based on py-faster-rcnn and TFFRCNN. For details about R-CNN please refer to the paper Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks by Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun.
- doomnet: PyTorch’s version of Doom-net implementing some RL models in ViZDoom environment.
- flownet: Pytorch implementation of FlowNet by Dosovitskiy et al.
- sqeezenet: Implementation of SqueezeNet in PyTorch; pretrained models on CIFAR-10 data to come. Plans to train the model on CIFAR-10 and add block connections too.
- WassersteinGAN: wassersteinGAN in pytorch.
- optnet: This repository is by Brandon Amos and J. Zico Kolter and contains the PyTorch source code to reproduce the experiments in our paper OptNet: Differentiable Optimization as a Layer in Neural Networks.
- qp solver: A fast and differentiable QP solver for PyTorch. Crafted by Brandon Amos and J. Zico Kolter.
- Continuous Deep Q-Learning with Model-based Acceleration : Reimplementation of Continuous Deep Q-Learning with Model-based Acceleration.
- Learning to learn by gradient descent by gradient descent: PyTorch implementation of Learning to learn by gradient descent by gradient descent.
- fast-neural-style: PyTorch implementation of fast-neural-style. The model uses the method described in Perceptual Losses for Real-Time Style Transfer and Super-Resolution along with Instance Normalization.
- PytorchNeuralStyleTransfer: Implementation of Neural Style Transfer in Pytorch.
- Fast Neural Style for Image Style Transform by Pytorch: Fast neural style for image style transfer, implemented in PyTorch.
- neural style transfer: An introduction to PyTorch through the Neural-Style algorithm (https://arxiv.org/abs/1508.06576) developed by Leon A. Gatys, Alexander S. Ecker and Matthias Bethge.
- VIN_PyTorch_Visdom: PyTorch implementation of Value Iteration Networks (VIN): Clean, Simple and Modular. Visualization in Visdom.
- YOLO2: YOLOv2 in PyTorch.
- attention-transfer: Attention transfer in pytorch, read the paper here.
- SVHNClassifier: A PyTorch implementation of Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks.
- pytorch-deform-conv: PyTorch implementation of Deformable Convolution.
- BEGAN-pytorch: PyTorch implementation of BEGAN: Boundary Equilibrium Generative Adversarial Networks.
- treelstm.pytorch: Tree LSTM implementation in PyTorch.
- AGE: Code for paper “Adversarial Generator-Encoder Networks” by Dmitry Ulyanov, Andrea Vedaldi and Victor Lempitsky which can be found here
- ResNeXt.pytorch: Reproduces ResNet-V3 (Aggregated Residual Transformations for Deep Neural Networks) with pytorch.
- pytorch-rl: Deep Reinforcement Learning with pytorch & visdom
- Deep-Leafsnap: LeafSnap replicated using deep neural networks to test accuracy compared to traditional computer vision methods.
- pytorch-CycleGAN-and-pix2pix: PyTorch implementation for both unpaired and paired image-to-image translation.
- A3C-PyTorch: PyTorch implementation of Advantage Asynchronous Actor-Critic (A3C) algorithms
- pytorch-value-iteration-networks: Pytorch implementation of Value Iteration Networks (NIPS 2016 best paper)
- PyTorch-Style-Transfer: PyTorch Implementation of Multi-style Generative Network for Real-time Transfer
- pytorch-deeplab-resnet: pytorch-deeplab-resnet-model.
- pointnet.pytorch: pytorch implementation for “PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation” https://arxiv.org/abs/1612.00593
- pytorch-playground: Base pretrained models and datasets in pytorch (MNIST, SVHN, CIFAR10, CIFAR100, STL10, AlexNet, VGG16, VGG19, ResNet, Inception, SqueezeNet).
- pytorch-dnc: Neural Turing Machine (NTM) & Differentiable Neural Computer (DNC) with pytorch & visdom.
- pytorch_image_classifier: Minimal but practical image classifier pipeline using PyTorch; fine-tuned on ResNet18, achieving 99% accuracy on a small custom dataset.
- mnist-svhn-transfer: PyTorch Implementation of CycleGAN and SGAN for Domain Transfer (Minimal).
- pytorch-yolo2: pytorch-yolo2
- dni: Implement Decoupled Neural Interfaces using Synthetic Gradients in Pytorch
- wgan-gp: A pytorch implementation of Paper “Improved Training of Wasserstein GANs”.
- pytorch-seq2seq-intent-parsing: Intent parsing and slot filling in PyTorch with seq2seq + attention
- pyTorch_NCE: An implementation of the Noise Contrastive Estimation algorithm for pyTorch. Working, yet not very efficient.
- molencoder: Molecular AutoEncoder in PyTorch
- GAN-weight-norm: Code for “On the Effects of Batch and Weight Normalization in Generative Adversarial Networks”
- lgamma: Implementations of polygamma, lgamma, and beta functions for PyTorch
- bigBatch: Code used to generate the results appearing in “Train longer, generalize better: closing the generalization gap in large batch training of neural networks”
- rl_a3c_pytorch: Reinforcement learning with implementation of A3C LSTM for Atari 2600.
- pytorch-retraining: Transfer Learning Shootout for PyTorch’s model zoo (torchvision)
- nmp_qc: Neural Message Passing for Computer Vision
- grad-cam: Pytorch implementation of Grad-CAM
- pytorch-trpo: PyTorch Implementation of Trust Region Policy Optimization (TRPO)
- pytorch-explain-black-box: PyTorch implementation of Interpretable Explanations of Black Boxes by Meaningful Perturbation
- vae_vpflows: Code in PyTorch for the convex combination linear IAF and the Householder Flow, J.M. Tomczak & M. Welling https://jmtomczak.github.io/deebmed.html
- relational-networks: Pytorch implementation of “A simple neural network module for relational reasoning” (Relational Networks) https://arxiv.org/pdf/1706.01427.pdf
- vqa.pytorch: Visual Question Answering in Pytorch
- end-to-end-negotiator: Deal or No Deal? End-to-End Learning for Negotiation Dialogues
- odin-pytorch: Principled Detection of Out-of-Distribution Examples in Neural Networks.
- FreezeOut: Accelerate Neural Net Training by Progressively Freezing Layers.
- ARAE: Code for the paper “Adversarially Regularized Autoencoders for Generating Discrete Structures” by Zhao, Kim, Zhang, Rush and LeCun.
- forward-thinking-pytorch: Pytorch implementation of “Forward Thinking: Building and Training Neural Networks One Layer at a Time” https://arxiv.org/pdf/1706.02480.pdf
- context_encoder_pytorch: PyTorch Implement of Context Encoders
- attention-is-all-you-need-pytorch: A PyTorch implementation of the Transformer model in “Attention is All You Need”.
- OpenFacePytorch: PyTorch module to use OpenFace’s nn4.small2.v1.t7 model
- neural-combinatorial-rl-pytorch: PyTorch implementation of Neural Combinatorial Optimization with Reinforcement Learning.
- pytorch-nec: PyTorch Implementation of Neural Episodic Control (NEC)
- seq2seq.pytorch: Sequence-to-Sequence learning using PyTorch
- Pytorch-Sketch-RNN: a pytorch implementation of arxiv.org/abs/1704.03477
- pytorch-pruning: PyTorch Implementation of [1611.06440] Pruning Convolutional Neural Networks for Resource Efficient Inference
- DrQA: A pytorch implementation of Reading Wikipedia to Answer Open-Domain Questions.
- YellowFin_Pytorch: auto-tuning momentum SGD optimizer
- samplernn-pytorch: PyTorch implementation of SampleRNN: An Unconditional End-to-End Neural Audio Generation Model.
- AEGeAN: Deeper DCGAN with AE stabilization
- pytorch-SRResNet: pytorch implementation for Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network arXiv:1609.04802v2
- vsepp: Code for the paper “VSE++: Improved Visual Semantic Embeddings”
- Pytorch-DPPO: Pytorch implementation of Distributed Proximal Policy Optimization: arxiv.org/abs/1707.02286
- UNIT: PyTorch Implementation of our Coupled VAE-GAN algorithm for Unsupervised Image-to-Image Translation
- efficient_densenet_pytorch: A memory-efficient implementation of DenseNets
- tsn-pytorch: Temporal Segment Networks (TSN) in PyTorch.
- SMASH: An experimental technique for efficiently exploring neural architectures.
- pytorch-retinanet: RetinaNet in PyTorch
- biogans: Implementation supporting the ICCV 2017 paper “GANs for Biological Image Synthesis”.
- Semantic Image Synthesis via Adversarial Learning: A PyTorch implementation of the paper “Semantic Image Synthesis via Adversarial Learning” in ICCV 2017.
- fmpytorch: A PyTorch implementation of a Factorization Machine module in cython.
- ORN: A PyTorch implementation of the paper “Oriented Response Networks” in CVPR 2017.
- pytorch-maml: PyTorch implementation of MAML: arxiv.org/abs/1703.03400
- pytorch-generative-model-collections: Collection of generative models in Pytorch version.
- vqa-winner-cvprw-2017: Pytorch implementation of the winner of the VQA Challenge Workshop at CVPR’17.
- tacotron_pytorch: PyTorch implementation of Tacotron speech synthesis model.
- pspnet-pytorch: PyTorch implementation of PSPNet segmentation network
- LM-LSTM-CRF: Empower Sequence Labeling with Task-Aware Language Model http://arxiv.org/abs/1709.04109
- face-alignment: Pytorch implementation of the paper “How far are we from solving the 2D & 3D Face Alignment problem? (and a dataset of 230,000 3D facial landmarks)”, ICCV 2017
- DepthNet: PyTorch DepthNet Training on Still Box dataset.
- EDSR-PyTorch: PyTorch version of the paper ‘Enhanced Deep Residual Networks for Single Image Super-Resolution’ (CVPRW 2017)
- e2c-pytorch: Embed to Control implementation in PyTorch.
- 3D-ResNets-PyTorch: 3D ResNets for Action Recognition.
- bandit-nmt: This is code repo for our EMNLP 2017 paper “Reinforcement Learning for Bandit Neural Machine Translation with Simulated Human Feedback”, which implements the A2C algorithm on top of a neural encoder-decoder model and benchmarks the combination under simulated noisy rewards.
- pytorch-a2c-ppo-acktr: PyTorch implementation of Advantage Actor Critic (A2C), Proximal Policy Optimization (PPO) and Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation (ACKTR).
- zalando-pytorch: Various experiments on the Fashion-MNIST dataset from Zalando.
- sphereface_pytorch: A PyTorch Implementation of SphereFace.
- Categorical DQN: A PyTorch Implementation of Categorical DQN from A Distributional Perspective on Reinforcement Learning.
- pytorch-ntm: pytorch ntm implementation.
- mask_rcnn_pytorch: Mask RCNN in PyTorch.
- graph_convnets_pytorch: PyTorch implementation of graph ConvNets, NIPS’16
- pytorch-faster-rcnn: A pytorch implementation of faster RCNN detection framework based on Xinlei Chen’s tf-faster-rcnn.
- torchMoji: A pyTorch implementation of the DeepMoji model: state-of-the-art deep learning model for analyzing sentiment, emotion, sarcasm etc.
- semantic-segmentation-pytorch: Pytorch implementation for Semantic Segmentation/Scene Parsing on MIT ADE20K dataset
- pytorch-qrnn: PyTorch implementation of the Quasi-Recurrent Neural Network – up to 16 times faster than NVIDIA’s cuDNN LSTM
- pytorch-sgns: Skipgram Negative Sampling in PyTorch.
- SfmLearner-Pytorch : Pytorch version of SfmLearner from Tinghui Zhou et al.
- deformable-convolution-pytorch: PyTorch implementation of Deformable Convolution.
- skip-gram-pytorch: A complete pytorch implementation of skipgram model (with subsampling and negative sampling). The embedding result is tested with Spearman’s rank correlation.
- stackGAN-v2: Pytorch implementation for reproducing StackGAN_v2 results in the paper StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial Networks by Han Zhang*, Tao Xu*, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, Dimitris Metaxas.
- self-critical.pytorch: Unofficial pytorch implementation for Self-critical Sequence Training for Image Captioning.
- pygcn: Graph Convolutional Networks in PyTorch.
- dnc: Differentiable Neural Computers, for Pytorch
- prog_gans_pytorch_inference: PyTorch inference for “Progressive Growing of GANs” with CelebA snapshot.
- pytorch-capsule: Pytorch implementation of Hinton’s Dynamic Routing Between Capsules.
- PyramidNet-PyTorch: A PyTorch implementation for PyramidNets (Deep Pyramidal Residual Networks, arxiv.org/abs/1610.02915)
- radio-transformer-networks: A PyTorch implementation of Radio Transformer Networks from the paper “An Introduction to Deep Learning for the Physical Layer”. arxiv.org/abs/1702.00832
- honk: PyTorch reimplementation of Google’s TensorFlow CNNs for keyword spotting.
- DeepCORAL: A PyTorch implementation of ‘Deep CORAL: Correlation Alignment for Deep Domain Adaptation.’, ECCV 2016
- pytorch-pose: A PyTorch toolkit for 2D Human Pose Estimation.
- lang-emerge-parlai: Implementation of EMNLP 2017 Paper “Natural Language Does Not Emerge ‘Naturally’ in Multi-Agent Dialog” using PyTorch and ParlAI
- Rainbow: Rainbow: Combining Improvements in Deep Reinforcement Learning
- pytorch_compact_bilinear_pooling v1: This repository has a pure Python implementation of Compact Bilinear Pooling and Count Sketch for PyTorch.
- CompactBilinearPooling-Pytorch v2: (Yang Gao, et al.) A Pytorch Implementation for Compact Bilinear Pooling.
- FewShotLearning: Pytorch implementation of the paper “Optimization as a Model for Few-Shot Learning”
- meProp: Codes for “meProp: Sparsified Back Propagation for Accelerated Deep Learning with Reduced Overfitting”.
- SFD_pytorch: A PyTorch Implementation of Single Shot Scale-invariant Face Detector.
- GradientEpisodicMemory: Continuum Learning with GEM: Gradient Episodic Memory. https://arxiv.org/abs/1706.08840
- DeblurGAN: Pytorch implementation of the paper DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks.
- StarGAN: StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation.
- CapsNet-pytorch: PyTorch implementation of NIPS 2017 paper Dynamic Routing Between Capsules.
- CondenseNet: CondenseNet: An Efficient DenseNet using Learned Group Convolutions.
- deep-image-prior: Image restoration with neural networks but without learning.
- deep-head-pose: Deep Learning Head Pose Estimation using PyTorch.
- Random-Erasing: This code has the source code for the paper “Random Erasing Data Augmentation”.
- FaderNetworks: Fader Networks: Manipulating Images by Sliding Attributes – NIPS 2017
- FlowNet 2.0: FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks
- pix2pixHD: Synthesizing and manipulating 2048×1024 images with conditional GANs tcwang0509.github.io/pix2pixHD
- pytorch-smoothgrad: SmoothGrad implementation in PyTorch
- RetinaNet: An implementation of RetinaNet in PyTorch.
- faster-rcnn.pytorch: This project is a faster Faster R-CNN implementation, aimed at accelerating the training of Faster R-CNN object detection models.
- mixup_pytorch: A PyTorch implementation of the paper Mixup: Beyond Empirical Risk Minimization in PyTorch.
- inplace_abn: In-Place Activated BatchNorm for Memory-Optimized Training of DNNs
- pytorch-pose-hg-3d: PyTorch implementation for 3D human pose estimation
- nmn-pytorch: Neural Module Network for VQA in Pytorch.
- bytenet: Pytorch implementation of bytenet from “Neural Machine Translation in Linear Time” paper
- bottom-up-attention-vqa: vqa, bottom-up-attention, pytorch
- yolo2-pytorch: YOLOv2 is one of the most popular one-stage object detectors. This project adopts PyTorch as the development framework to increase productivity and uses ONNX to convert models into Caffe2 to benefit engineering deployment.
- reseg-pytorch: PyTorch Implementation of ReSeg (arxiv.org/pdf/1511.07053.pdf)
- binary-stochastic-neurons: Binary Stochastic Neurons in PyTorch.
- pytorch-pose-estimation: PyTorch Implementation of Realtime Multi-Person Pose Estimation project.
- interaction_network_pytorch: Pytorch Implementation of Interaction Networks for Learning about Objects, Relations and Physics.
- NoisyNaturalGradient: Pytorch Implementation of paper “Noisy Natural Gradient as Variational Inference”.
- ewc.pytorch: An implementation of Elastic Weight Consolidation (EWC), proposed in James Kirkpatrick et al., Overcoming Catastrophic Forgetting in Neural Networks, 2016 (10.1073/pnas.1611835114).
- pytorch-zssr: PyTorch implementation of 1712.06087 “Zero-Shot” Super-Resolution using Deep Internal Learning
- deep_image_prior: An implementation of image reconstruction methods from Deep Image Prior (Ulyanov et al., 2017) in PyTorch.
- pytorch-transformer: pytorch implementation of Attention is all you need.
- DeepRL-Grounding: This is a PyTorch implementation of the AAAI-18 paper Gated-Attention Architectures for Task-Oriented Language Grounding
- deep-forecast-pytorch: Wind Speed Prediction using LSTMs in PyTorch (arxiv.org/pdf/1707.08110.pdf)
- cat-net: Canonical Appearance Transformations
- minimal_glo: Minimal PyTorch implementation of Generative Latent Optimization from the paper “Optimizing the Latent Space of Generative Networks”
- LearningToCompare-Pytorch: Pytorch Implementation for Paper: Learning to Compare: Relation Network for Few-Shot Learning.
- poincare-embeddings: PyTorch implementation of the NIPS-17 paper “Poincaré Embeddings for Learning Hierarchical Representations”.
- pytorch-trpo (Hessian-vector product version): This is a PyTorch implementation of “Trust Region Policy Optimization (TRPO)” with exact Hessian-vector product instead of finite differences approximation.
- ggnn.pytorch: A PyTorch Implementation of Gated Graph Sequence Neural Networks (GGNN).
- visual-interaction-networks-pytorch: This is an implementation of DeepMind’s Visual Interaction Networks paper using PyTorch
- adversarial-patch: PyTorch implementation of adversarial patch.
- Prototypical-Networks-for-Few-shot-Learning-PyTorch: Implementation of Prototypical Networks for Few Shot Learning (arxiv.org/abs/1703.05175) in Pytorch
- Visual-Feature-Attribution-Using-Wasserstein-GANs-Pytorch: Implementation of Visual Feature Attribution using Wasserstein GANs (arxiv.org/abs/1711.08998) in PyTorch.
- PhotographicImageSynthesiswithCascadedRefinementNetworks-Pytorch: Photographic Image Synthesis with Cascaded Refinement Networks – Pytorch Implementation
- ENAS-pytorch: PyTorch implementation of “Efficient Neural Architecture Search via Parameters Sharing”.
- Neural-IMage-Assessment: A PyTorch Implementation of Neural IMage Assessment.
- proxprop: Proximal Backpropagation – a neural network training algorithm that takes implicit instead of explicit gradient steps.
- FastPhotoStyle: A Closed-form Solution to Photorealistic Image Stylization
- Deep-Image-Analogy-PyTorch: A python implementation of Deep-Image-Analogy based on pytorch.
- Person-reID_pytorch: PyTorch for Person re-ID.
- pt-dilate-rnn: Dilated RNNs in pytorch.
- pytorch-i-revnet: Pytorch implementation of i-RevNets.
- OrthNet: TensorFlow and PyTorch layers for generating Orthogonal Polynomials.
- DRRN-pytorch: An implementation of Deep Recursive Residual Network for Super Resolution (DRRN), CVPR 2017
- shampoo.pytorch: An implementation of shampoo.
- Neural-IMage-Assessment 2: A PyTorch Implementation of Neural IMage Assessment.
- TCN: Sequence modeling benchmarks and temporal convolutional networks locuslab/TCN
- DCC: This repository contains the source code and data for reproducing results of Deep Continuous Clustering paper.
- packnet: Code for PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning arxiv.org/abs/1711.05769
- PyTorch-progressive_growing_of_gans: PyTorch implementation of Progressive Growing of GANs for Improved Quality, Stability, and Variation.
- nonauto-nmt: PyTorch Implementation of “Non-Autoregressive Neural Machine Translation”
- PyTorch-GAN: PyTorch implementations of Generative Adversarial Networks.
- PyTorchWavelets: PyTorch implementation of the wavelet analysis found in Torrence and Compo (1998)
- pytorch-made: MADE (Masked Autoencoder Density Estimation) implementation in PyTorch
- VRNN: Pytorch implementation of the Variational RNN (VRNN), from A Recurrent Latent Variable Model for Sequential Data.
- flow: Pytorch implementation of ICLR 2018 paper Deep Learning for Physical Processes: Integrating Prior Scientific Knowledge.
- deepvoice3_pytorch: PyTorch implementation of convolutional networks-based text-to-speech synthesis models
- psmm: implementation of the Pointer Sentinel Mixture Model, as described in the paper by Stephen Merity et al.
- tacotron2: Tacotron 2 – PyTorch implementation with faster-than-realtime inference.
- AccSGD: Implements pytorch code for the Accelerated SGD algorithm.
- QANet-pytorch: an implementation of QANet with PyTorch (EM/F1 = 70.5/77.2 after 20 epochs, about 20 hours on one 1080Ti card.)
- ConvE: Convolutional 2D Knowledge Graph Embeddings
- Structured-Self-Attention: Implementation for the paper A Structured Self-Attentive Sentence Embedding, which is published in ICLR 2017: arxiv.org/abs/1703.03130 .
- graphsage-simple: Simple reference implementation of GraphSAGE.
- Detectron.pytorch: A pytorch implementation of Detectron. Both training from scratch and inferring directly from pretrained Detectron weights are available.
- R2Plus1D-PyTorch: PyTorch implementation of the R2Plus1D convolution based ResNet architecture described in the paper “A Closer Look at Spatiotemporal Convolutions for Action Recognition”
- StackNN: A PyTorch implementation of differentiable stacks for use in neural networks.
- translagent: Code for Emergent Translation in Multi-Agent Communication.
- ban-vqa: Bilinear attention networks for visual question answering.
- pytorch-openai-transformer-lm: This is a PyTorch implementation of the TensorFlow code provided with OpenAI’s paper “Improving Language Understanding by Generative Pre-Training” by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
- T2F: Text-to-Face generation using Deep Learning. This project combines two of the recent architectures StackGAN and ProGAN for synthesizing faces from textual descriptions.
- pytorch-fid: A Port of Fréchet Inception Distance (FID score) to PyTorch
- vae_vpflows: Code in PyTorch for the convex combination linear IAF and the Householder Flow, J.M. Tomczak & M. Welling jmtomczak.github.io/deebmed.html
- CoordConv-pytorch: Pytorch implementation of CoordConv introduced in ‘An intriguing failing of convolutional neural networks and the CoordConv solution’ paper. (arxiv.org/pdf/1807.03247.pdf)
- SDPoint: Implementation of “Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks”, published in CVPR 2018.
- SRDenseNet-pytorch: SRDenseNet-pytorch(ICCV_2017)
- GAN_stability: Code for paper “Which Training Methods for GANs do actually Converge? (ICML 2018)”
- Mask-RCNN: A PyTorch implementation of the architecture of Mask RCNN, serves as an introduction to working with PyTorch
- pytorch-coviar: Compressed Video Action Recognition
- PNASNet.pytorch: PyTorch implementation of PNASNet-5 on ImageNet.
- NALU-pytorch: Basic pytorch implementation of NAC/NALU from Neural Arithmetic Logic Units arxiv.org/pdf/1808.00508.pdf
- LOLA_DiCE: Pytorch implementation of LOLA (arxiv.org/abs/1709.04326) using DiCE (arxiv.org/abs/1802.05098)
- generative-query-network-pytorch: Generative Query Network (GQN) in PyTorch as described in “Neural Scene Representation and Rendering”
- pytorch_hmax: Implementation of the HMAX model of vision in PyTorch.
- FCN-pytorch-easiest: aims to be the easiest, ready-to-use PyTorch implementation of FCN (Fully Convolutional Networks)
- transducer: A Fast Sequence Transducer Implementation with PyTorch Bindings.
- AVO-pytorch: Implementation of Adversarial Variational Optimization in PyTorch.
- HCN-pytorch: A pytorch reimplementation of Co-occurrence Feature Learning from Skeleton Data for Action Recognition and Detection with Hierarchical Aggregation.
- binary-wide-resnet: PyTorch implementation of Wide Residual Networks with 1-bit weights by McDonnell (ICLR 2018)
- piggyback: Code for Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights arxiv.org/abs/1801.06519
- vid2vid: Pytorch implementation of our method for high-resolution (e.g. 2048×1024) photorealistic video-to-video translation.
- poisson-convolution-sum: Implements an infinite sum of poisson-weighted convolutions
- tbd-nets: PyTorch implementation of “Transparency by Design: Closing the Gap Between Performance and Interpretability in Visual Reasoning” arxiv.org/abs/1803.05268
- attn2d: Pervasive Attention: 2D Convolutional Networks for Sequence-to-Sequence Prediction
- yolov3: YOLOv3: Training and inference in PyTorch pjreddie.com/darknet/yolo
- deep-dream-in-pytorch: Pytorch implementation of the DeepDream computer vision algorithm.
- pytorch-flows: PyTorch implementations of algorithms for density estimation
- quantile-regression-dqn-pytorch: Quantile Regression DQN a Minimal Working Example
- relational-rnn-pytorch: An implementation of DeepMind’s Relational Recurrent Neural Networks in PyTorch.
- DEXTR-PyTorch: Deep Extreme Cut http://www.vision.ee.ethz.ch/~cvlsegmentation/dextr
- PyTorch_GBW_LM: PyTorch Language Model for Google Billion Word Dataset.
- Pytorch-NCE: The Noise Contrastive Estimation for softmax output written in Pytorch
- generative-models: Annotated, understandable, and visually interpretable PyTorch implementations of: VAE, BIRVAE, NSGAN, MMGAN, WGAN, WGANGP, LSGAN, DRAGAN, BEGAN, RaGAN, InfoGAN, fGAN, FisherGAN.
- convnet-aig: PyTorch implementation for Convolutional Networks with Adaptive Inference Graphs.
- integrated-gradient-pytorch: This is the pytorch implementation of the paper – Axiomatic Attribution for Deep Networks.
- MalConv-Pytorch: Pytorch implementation of MalConv.
- trellisnet: Trellis Networks for Sequence Modeling
- Learning to Communicate with Deep Multi-Agent Reinforcement Learning: pytorch implementation of Learning to Communicate with Deep Multi-Agent Reinforcement Learning paper.
- pnn.pytorch: PyTorch implementation of CVPR’18 – Perturbative Neural Networks http://xujuefei.com/pnn.html.
- Face_Attention_Network: Pytorch implementation of face attention network as described in Face Attention Network: An Effective Face Detector for the Occluded Faces.
- waveglow: A Flow-based Generative Network for Speech Synthesis.
- deepfloat: This repository contains the SystemVerilog RTL, C++, HLS (Intel FPGA OpenCL to wrap RTL code) and Python needed to reproduce the numerical results in “Rethinking floating point for deep learning”
- EPSR: Pytorch implementation of Analyzing Perception-Distortion Tradeoff using Enhanced Perceptual Super-resolution Network. This work has won the first place in PIRM2018-SR competition (region 1) held as part of the ECCV 2018.
- ClariNet: A Pytorch Implementation of ClariNet arxiv.org/abs/1807.07281
- pytorch-pretrained-BERT: PyTorch version of Google AI’s BERT model with script to load Google’s pre-trained models
- torch_waveglow: A PyTorch implementation of the WaveGlow: A Flow-based Generative Network for Speech Synthesis.
- 3DDFA: The pytorch improved re-implementation of TPAMI 2017 paper: Face Alignment in Full Pose Range: A 3D Total Solution.
- loss-landscape: Code for visualizing the loss landscape of neural nets.
- famos: Pytorch implementation of the paper “Copy the Old or Paint Anew? An Adversarial Framework for (non-) Parametric Image Stylization” available at http://arxiv.org/abs/1811.09236.
- back2future.pytorch: This is a Pytorch implementation of Janai, J., Güney, F., Ranjan, A., Black, M. and Geiger, A., Unsupervised Learning of Multi-Frame Optical Flow with Occlusions. ECCV 2018.
- FFTNet: Unofficial Implementation of FFTNet vocode paper.
- FaceBoxes.PyTorch: A PyTorch Implementation of FaceBoxes.
- Transformer-XL: Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context. https://github.com/kimiyoung/transformer-xl
- associative_compression_networks: Associative Compression Networks for Representation Learning.
- fluidnet_cxx: FluidNet re-written with ATen tensor lib.
- Deep-Reinforcement-Learning-Algorithms-with-PyTorch: This repository contains PyTorch implementations of deep reinforcement learning algorithms.
- Shufflenet-v2-Pytorch: This is a Pytorch implementation of faceplusplus’s ShuffleNet-v2.
- GraphWaveletNeuralNetwork: This is a Pytorch implementation of Graph Wavelet Neural Network. ICLR 2019.
- AttentionWalk: This is a Pytorch implementation of Watch Your Step: Learning Node Embeddings via Graph Attention. NIPS 2018.
- SGCN: This is a Pytorch implementation of Signed Graph Convolutional Network. ICDM 2018.
- SINE: This is a Pytorch implementation of SINE: Scalable Incomplete Network Embedding. ICDM 2018.
- GAM: This is a Pytorch implementation of Graph Classification using Structural Attention. KDD 2018.
- neural-style-pt: A PyTorch implementation of Justin Johnson’s Neural-style.
- TuckER: TuckER: Tensor Factorization for Knowledge Graph Completion.
- pytorch-prunes: Pruning neural networks: is it time to nip it in the bud?
- SimGNN: SimGNN: A Neural Network Approach to Fast Graph Similarity Computation.
- Character CNN: PyTorch implementation of the Character-level Convolutional Networks for Text Classification paper.
- XLM: PyTorch original implementation of Cross-lingual Language Model Pretraining.
- DiffAI: A provable defense against adversarial examples and library for building compatible PyTorch models.
- APPNP: Combining Neural Networks with Personalized PageRank for Classification on Graphs. ICLR 2019.
- NGCN: A Higher-Order Graph Convolutional Layer. NeurIPS 2018.
- gpt-2-Pytorch: Simple Text-Generator with OpenAI gpt-2 Pytorch Implementation
- Splitter: Splitter: Learning Node Representations that Capture Multiple Social Contexts. (WWW 2019).
- CapsGNN: Capsule Graph Neural Network. (ICLR 2019).
- BigGAN-PyTorch: The author’s officially unofficial PyTorch BigGAN implementation.
- ppo_pytorch_cpp: This is an implementation of the proximal policy optimization algorithm for the C++ API of Pytorch.
- RandWireNN: Implementation of: “Exploring Randomly Wired Neural Networks for Image Recognition”.
- Zero-shot Intent CapsNet: GPU-accelerated PyTorch implementation of “Zero-shot User Intent Detection via Capsule Neural Networks”.
- SEAL-CI Semi-Supervised Graph Classification: A Hierarchical Graph Perspective. (WWW 2019).
- MixHop: MixHop: Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing. ICML 2019.
- densebody_pytorch: PyTorch implementation of CloudWalk’s recent paper DenseBody.
- voicefilter: Unofficial PyTorch implementation of Google AI’s VoiceFilter system http://swpark.me/voicefilter.
- NVIDIA/semantic-segmentation: A PyTorch Implementation of Improving Semantic Segmentation via Video Propagation and Label Relaxation, In CVPR2019.
- ClusterGCN: A PyTorch implementation of “Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks” (KDD 2019).
- NVlabs/DG-Net: A PyTorch implementation of “Joint Discriminative and Generative Learning for Person Re-identification” (CVPR19 Oral).
- NCRF: Cancer metastasis detection with neural conditional random field (NCRF)
- pytorch-sift: PyTorch implementation of SIFT descriptor.
- brain-segmentation-pytorch: U-Net implementation in PyTorch for FLAIR abnormality segmentation in brain MRI.
- glow-pytorch: PyTorch implementation of Glow, Generative Flow with Invertible 1×1 Convolutions (arxiv.org/abs/1807.03039)
- EfficientNets-PyTorch: A PyTorch implementation of EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks.
- STEAL: STEAL – Learning Semantic Boundaries from Noisy Annotations nv-tlabs.github.io/STEAL
- EigenDamage-Pytorch: Official implementation of the ICML’19 paper “EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis”.
- Aspect-level-sentiment: Code and dataset for ACL2018 paper “Exploiting Document Knowledge for Aspect-level Sentiment Classification”
- breast_cancer_classifier: Deep Neural Networks Improve Radiologists’ Performance in Breast Cancer Screening arxiv.org/abs/1903.08297
- DGC-Net: A PyTorch implementation of “DGC-Net: Dense Geometric Correspondence Network”.
- universal-triggers: Universal Adversarial Triggers for Attacking and Analyzing NLP (EMNLP 2019)
- Deep-Reinforcement-Learning-Algorithms-with-PyTorch: PyTorch implementations of deep reinforcement learning algorithms and environments.
- simple-effective-text-matching-pytorch: A pytorch implementation of the ACL2019 paper “Simple and Effective Text Matching with Richer Alignment Features”.
- Adaptive-segmentation-mask-attack (ASMA): A pytorch implementation of the MICCAI2019 paper “Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation”.
- NVIDIA/unsupervised-video-interpolation: A PyTorch Implementation of Unsupervised Video Interpolation Using Cycle Consistency, In ICCV 2019.
- Seg-Uncertainty: Unsupervised Scene Adaptation with Memory Regularization in vivo, In IJCAI 2020.
- pulse: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models
- distance-encoding: Distance Encoding – Design Provably More Powerful GNNs for Structural Representation Learning.
Talks & conferences
- PyTorch Conference 2018: the first PyTorch developer conference, held in 2018.
Pytorch elsewhere
- the-incredible-pytorch: The Incredible PyTorch: a curated list of tutorials, papers, projects, communities and more relating to PyTorch.
- generative models: Collection of generative models, e.g. GAN, VAE in Tensorflow, Keras, and Pytorch. http://wiseodd.github.io
- pytorch vs tensorflow: an informative thread on reddit.
- Pytorch discussion forum
- pytorch notebook: docker-stack: A project similar to Jupyter Notebook Scientific Python Stack
- drawlikebobross: Draw like Bob Ross using the power of Neural Networks (With PyTorch)!
- pytorch-tvmisc: Totally Versatile Miscellanea for Pytorch
- pytorch-a3c-mujoco: Implement A3C for Mujoco gym envs.
- PyTorch in 5 Minutes.
- pytorch_chatbot: A Marvelous ChatBot implemented using PyTorch.
- malmo-challenge: Malmo Collaborative AI Challenge – Team Pig Catcher
- sketchnet: A model that takes an image and generates Processing source code to regenerate that image
- Deep-Learning-Boot-Camp: A nonprofit, community-run 5-day Deep Learning Bootcamp http://deep-ml.com.
- Amazon_Forest_Computer_Vision: Satellite Image tagging code using PyTorch / Keras with lots of PyTorch tricks. kaggle competition.
- AlphaZero_Gomoku: An implementation of the AlphaZero algorithm for Gomoku (also called Gobang or Five in a Row)
- pytorch-cv: Repo for Object Detection, Segmentation & Pose Estimation.
- deep-person-reid: Pytorch implementation of deep person re-identification approaches.
- pytorch-template: PyTorch template project
- Deep Learning With Pytorch TextBook: A practical guide to building neural network models in text and vision using PyTorch. Purchase on Amazon; github code repo.
- compare-tensorflow-pytorch: Compare outputs between layers written in Tensorflow and layers written in Pytorch.
- hasktorch: Tensors and neural networks in Haskell
- Deep Learning With Pytorch: Deep Learning with PyTorch teaches you how to implement deep learning algorithms with Python and PyTorch.
- nimtorch: PyTorch – Python + Nim
- derplearning: Self Driving RC Car Code.
- pytorch-saltnet: Kaggle | 9th place single model solution for TGS Salt Identification Challenge.
- pytorch-scripts: A few Windows specific scripts for PyTorch.
- pytorch_misc: Code snippets created for the PyTorch discussion board.
- awesome-pytorch-scholarship: A list of awesome PyTorch scholarship articles, guides, blogs, courses and other resources.
- MentisOculi: A raytracer written in PyTorch (raynet?)
- DoodleMaster: “Don’t code your UI, Draw it !”
- ocaml-torch: OCaml bindings for PyTorch.
- extension-script: Example repository for custom C++/CUDA operators for TorchScript.
- pytorch-inference: PyTorch 1.0 inference in C++ on Windows10 platforms.
- pytorch-cpp-inference: Serving PyTorch 1.0 Models as a Web Server in C++.
- tch-rs: Rust bindings for PyTorch.
- TorchSharp: .NET bindings for the Pytorch engine
- ML Workspace: All-in-one web IDE for machine learning and data science. Combines Jupyter, VS Code, PyTorch, and many other tools/libraries into one Docker image.
- PyTorch Style Guide Style guide for PyTorch code. Consistent and good code style helps collaboration and prevents errors!
PyTorch – convolutional neural networks
Lecture: Convolutional neural networks
torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode=’zeros’)
Parameters
• in_channels (int) – Number of channels in the input image
• out_channels (int) – Number of channels produced by the convolution
• kernel_size (int or tuple) – Size of the convolving kernel
• stride (int or tuple, optional) – Stride of the convolution. (Default: 1)
• padding (int or tuple, optional) – Zero-padding added to both sides of the input (Default: 0)
• padding_mode (string, optional) – ‘zeros’, ‘reflect’, ‘replicate’ or ‘circular’. (Default: ‘zeros’)
• dilation (int or tuple, optional) – Spacing between kernel elements. (Default: 1)
• groups (int, optional) – Number of blocked connections from input to output channels. (Default: 1)
• bias (bool, optional) – If True, adds a learnable bias to the output. (Default: True)
This URL has a helpful visualization of the process.
So in_channels for the first layer is 3 for color images (3 channels); for black-and-white images it should be 1, and some satellite images have 4.
out_channels is the number of feature maps the convolution produces, i.e. the number of filters.
Let’s create an example to “prove” that.
import torch
import torch.nn as nn
c = nn.Conv2d(1, 3, stride=1, kernel_size=(4, 5))
print(c.weight.shape)
print(c.weight)
Out
torch.Size([3, 1, 4, 5])
Parameter containing:
tensor([[[[ 0.1571, 0.0723, 0.0900, 0.1573, 0.0537],
[-0.1213, 0.0579, 0.0009, -0.1750, 0.1616],
[-0.0427, 0.1968, 0.1861, -0.1787, -0.2035],
[-0.0796, 0.1741, -0.2231, 0.2020, -0.1762]]],
[[[ 0.1811, 0.0660, 0.1653, 0.0605, 0.0417],
[ 0.1885, -0.0440, -0.1638, 0.1429, -0.0606],
[-0.1395, -0.1202, 0.0498, 0.0432, -0.1132],
[-0.2073, 0.1480, -0.1296, -0.1661, -0.0633]]],
[[[ 0.0435, -0.2017, 0.0676, -0.0711, -0.1972],
[ 0.0968, -0.1157, 0.1012, 0.0863, -0.1844],
[-0.2080, -0.1355, -0.1842, -0.0017, -0.2123],
[-0.1495, -0.2196, 0.1811, 0.1672, -0.1817]]]], requires_grad=True)
If we alter the number of out_channels,
c = nn.Conv2d(1, 5, stride=1, kernel_size=(4, 5))
print(c.weight.shape) # torch.Size([5, 1, 4, 5])
we get 5 filters, each 4×5, since that is our kernel size. If we set the number of input channels to 2 (some images have only 2 channels),
c = nn.Conv2d(2, 5, stride=1, kernel_size=(4, 5))
print(c.weight.shape) # torch.Size([5, 2, 4, 5])
each filter will have 2 channels.
I think the terminology follows this book, and since the book does not call them “filters”, that term is not used here.
So you are right: filters are what the conv layer learns, and the number of filters equals the number of output channels. They are initialized randomly at the start.
The number of activations is calculated from the batch size and the image dimensions:
bs = 16
x = torch.randn(bs, 3, 28, 28)
c = nn.Conv2d(3, 10, kernel_size=5, stride=1, padding=2)
out = c(x)
print(out.nelement())  # 125440 activations = 16 * 10 * 28 * 28
https://www.programcreek.com/python/example/107691/torch.nn.Conv2d
Conv2d
class torch.nn.Conv2d(in_channels: int, out_channels: int, kernel_size: Union[T, Tuple[T, T]], stride: Union[T, Tuple[T, T]] = 1, padding: Union[T, Tuple[T, T]] = 0, dilation: Union[T, Tuple[T, T]] = 1, groups: int = 1, bias: bool = True, padding_mode: str = ‘zeros’)
Applies a 2D convolution over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size $(N, C_{\text{in}}, H, W)$ and output $(N, C_{\text{out}}, H_{\text{out}}, W_{\text{out}})$ can be precisely described as:
$$\text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k=0}^{C_{\text{in}}-1} \text{weight}(C_{\text{out}_j}, k) \star \text{input}(N_i, k)$$
where $\star$ is the valid 2D cross-correlation operator, $N$ is a batch size, $C$ denotes a number of channels, $H$ is a height of input planes in pixels, and $W$ is width in pixels.
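To see that nn.Conv2d computes exactly this sum, it can be compared against the cross-correlation written out by hand; a minimal sketch (all sizes chosen arbitrarily):

import torch
import torch.nn as nn

torch.manual_seed(0)
m = nn.Conv2d(in_channels=2, out_channels=3, kernel_size=3, bias=True)
x = torch.randn(1, 2, 5, 5)  # (N, C_in, H, W)

# out(N_i, C_out_j) = bias(C_out_j) + sum_k weight(C_out_j, k) ⋆ input(N_i, k)
out = torch.zeros(1, 3, 3, 3)  # H_out = W_out = 5 - 3 + 1 = 3
for j in range(3):  # output channels
    for h in range(3):
        for w in range(3):
            patch = x[0, :, h:h+3, w:w+3]  # all input channels at once
            out[0, j, h, w] = (patch * m.weight[j]).sum() + m.bias[j]

print(torch.allclose(out, m(x), atol=1e-6))  # True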
• stride controls the stride for the cross-correlation, a single number or a tuple.
• padding controls the amount of implicit zero-padding added to both sides of the input, padding points for each dimension.
• dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.
• groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,
o At groups=1, all inputs are convolved to all outputs.
o At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated.
o At groups = in_channels, each input channel is convolved with its own set of filters, of size $\left\lfloor\frac{\text{out\_channels}}{\text{in\_channels}}\right\rfloor$.
The parameters kernel_size, stride, padding, dilation can either be:
• a single int – in which case the same value is used for the height and width dimension
• a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension
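A small sketch of these conventions (sizes chosen arbitrarily); note the weight shape of the grouped convolution:

import torch
import torch.nn as nn

x = torch.randn(1, 4, 32, 32)

# A single int applies to both height and width; a tuple is (height, width).
same = nn.Conv2d(4, 8, kernel_size=3, padding=1)            # keeps 32x32
rect = nn.Conv2d(4, 8, kernel_size=(3, 5), padding=(1, 2))  # also keeps 32x32

# groups=2: two convolutions side by side, each seeing 2 of the 4 input
# channels and producing 4 of the 8 output channels.
grouped = nn.Conv2d(4, 8, kernel_size=3, padding=1, groups=2)

print(same(x).shape, rect(x).shape, grouped(x).shape)  # all torch.Size([1, 8, 32, 32])
print(same.weight.shape)     # torch.Size([8, 4, 3, 3])
print(grouped.weight.shape)  # torch.Size([8, 2, 3, 3]) (in_channels/groups = 2)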
NOTE
Depending on the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, not a full cross-correlation. It is up to the user to add proper padding.
NOTE
When groups == in_channels and out_channels == K * in_channels, where K is a positive integer, this operation is also termed in the literature a depthwise convolution.
In other words, for an input of size $(N, C_{\text{in}}, H_{\text{in}}, W_{\text{in}})$, a depthwise convolution with depthwise multiplier K can be constructed with the arguments $(\text{in\_channels}=C_{\text{in}}, \text{out\_channels}=C_{\text{in}} \times K, \ldots, \text{groups}=C_{\text{in}})$.
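A minimal sketch of that depthwise construction (channel counts chosen arbitrarily):

import torch
import torch.nn as nn

C_in, K = 3, 2  # depthwise multiplier K
x = torch.randn(1, C_in, 8, 8)

# groups == in_channels and out_channels == K * in_channels:
# each input channel is convolved with its own K filters.
dw = nn.Conv2d(in_channels=C_in, out_channels=C_in * K,
               kernel_size=3, padding=1, groups=C_in)

print(dw(x).shape)      # torch.Size([1, 6, 8, 8])
print(dw.weight.shape)  # torch.Size([6, 1, 3, 3]) (one input channel per filter)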
NOTE
In some circumstances when using the CUDA backend with CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. Please see the notes on Reproducibility for background.
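In practice a reproducibility preamble might look like the following (cudnn.benchmark is an additional, commonly paired setting not mentioned in the note above):

import torch

torch.backends.cudnn.deterministic = True  # trade speed for reproducibility
torch.backends.cudnn.benchmark = False     # disable algorithm auto-tuning
torch.manual_seed(42)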
Parameters
• in_channels (int) – Number of channels in the input image
• out_channels (int) – Number of channels produced by the convolution
• kernel_size (int or tuple) – Size of the convolving kernel
• stride (int or tuple, optional) – Stride of the convolution. Default: 1
• padding (int or tuple, optional) – Zero-padding added to both sides of the input. Default: 0
• padding_mode (string, optional) – ‘zeros’, ‘reflect’, ‘replicate’ or ‘circular’. Default: ‘zeros’
• dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
• groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
• bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
Shape:
• Input: $(N, C_{\text{in}}, H_{\text{in}}, W_{\text{in}})$
• Output: $(N, C_{\text{out}}, H_{\text{out}}, W_{\text{out}})$, where
$$H_{\text{out}} = \left\lfloor \frac{H_{\text{in}} + 2 \times \text{padding}[0] - \text{dilation}[0] \times (\text{kernel\_size}[0] - 1) - 1}{\text{stride}[0]} + 1 \right\rfloor$$
$$W_{\text{out}} = \left\lfloor \frac{W_{\text{in}} + 2 \times \text{padding}[1] - \text{dilation}[1] \times (\text{kernel\_size}[1] - 1) - 1}{\text{stride}[1]} + 1 \right\rfloor$$
Variables
• ~Conv2d.weight (Tensor) – the learnable weights of the module, of shape $(\text{out\_channels}, \frac{\text{in\_channels}}{\text{groups}}, \text{kernel\_size}[0], \text{kernel\_size}[1])$. The values of these weights are sampled from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$, where $k = \frac{\text{groups}}{C_{\text{in}} \cdot \prod_{i=0}^{1}\text{kernel\_size}[i]}$
• ~Conv2d.bias (Tensor) – the learnable bias of the module, of shape $(\text{out\_channels})$. If bias is True, the values are sampled from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$ with the same $k$
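These shape formulas can be checked against the module directly, here with the same configuration as the Examples below (the helper name is my own):

import math
import torch
import torch.nn as nn

def conv2d_out(size, kernel, stride=1, padding=0, dilation=1):
    # Output length along one spatial dimension, per the formula above.
    return math.floor((size + 2 * padding - dilation * (kernel - 1) - 1) / stride + 1)

m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))
x = torch.randn(20, 16, 50, 100)

h = conv2d_out(50, 3, stride=2, padding=4, dilation=3)   # 26
w = conv2d_out(100, 5, stride=1, padding=2, dilation=1)  # 100
print((h, w), m(x).shape)  # (26, 100) torch.Size([20, 33, 26, 100])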
Examples
>>> # With square kernels and equal stride
>>> m = nn.Conv2d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
>>> # non-square kernels and unequal stride and with padding and dilation
>>> m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))
>>> input = torch.randn(20, 16, 50, 100)
>>> output = m(input)
By the end of 2020, more developers will be working on the PyTorch machine learning library than on TensorFlow. PyTorch was created by Facebook, TensorFlow by Google; both are open source. TensorFlow is considered the de facto standard and appeared earlier than PyTorch, but according to OpenHub both projects have had roughly the same number of active developers over the past year. The TensorFlow user community is still much larger than PyTorch’s, yet in the research community Facebook’s library has pulled ahead and is now used far more widely. A commonly cited advantage of PyTorch is that it is a native Python library, and Python is currently the language most widely used for machine learning, whereas TensorFlow exposes a separate API for use from Python. In addition, PyTorch uses a dynamic graph model, which simplifies programming, although TensorFlow gained a similar feature starting with version 2.0.
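The “dynamic graph” point can be illustrated with ordinary Python control flow inside forward, which PyTorch re-traces on every call; a minimal sketch (module and names invented for illustration):

import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)

    def forward(self, x):
        # The graph is built anew on each call, so data-dependent
        # Python control flow is allowed.
        for _ in range(int(torch.randint(1, 4, (1,)))):
            x = torch.relu(self.linear(x))
        return x

net = DynamicNet()
out = net(torch.randn(2, 4))
out.sum().backward()  # autograd follows the path actually executed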
Deep Learning (with PyTorch)
Week 1 – Lecture: History, motivation, and evolution of Deep Learning
Week 1 – Practicum: Classification, linear algebra, and visualisation
Week 2 – Lecture: Stochastic gradient descent and backpropagation
Week 3 – Lecture: Convolutional neural networks
An example CNN classifier; the fragment below is reconstructed to be runnable and to match the printed model that follows (the class name Net and the 28×28 input assumption are inferred from fc1’s in_features = 50176 = 64 · 28 · 28):

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self, in_c, n_classes):
        super().__init__()
        self.conv1 = nn.Conv2d(in_c, 32, kernel_size=3, stride=1, padding=1)
        self.bn1 = nn.BatchNorm2d(32)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1)
        self.bn2 = nn.BatchNorm2d(64)
        self.fc1 = nn.Linear(64 * 28 * 28, 1024)  # 50176 inputs for 28x28 images
        self.fc2 = nn.Linear(1024, n_classes)

    def forward(self, x):
        x = F.relu(self.bn1(self.conv1(x)))
        x = F.relu(self.bn2(self.conv2(x)))
        x = torch.flatten(x, 1)
        x = F.relu(self.fc1(x))
        return self.fc2(x)

model = Net(in_c=1, n_classes=10)
print(model)
Net(
  (conv1): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (bn1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (conv2): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (fc1): Linear(in_features=50176, out_features=1024, bias=True)
  (fc2): Linear(in_features=1024, out_features=10, bias=True)
)
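A quick smoke test of the module above, assuming 1-channel 28×28 inputs (consistent with fc1’s in_features):

x = torch.randn(16, 1, 28, 28)  # batch of 16 grayscale 28x28 images
logits = model(x)
print(logits.shape)             # torch.Size([16, 10])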
implementing-an-autoencoder-in-pytorch
Building Autoencoder in Pytorch
L1aoXingyu pytorch-beginner
kaggle.com autoencoders-with-pytorch
pytorch-beginner 08-AutoEncoder
kaggle Convolutional Autoencoder
Denoising-Autoencoder-in-Pytorch
github.com Autoencoders+pytorch
torch_mobile flutter plugin
Pytorch3d
PyTorch GitHub
PyTorch Geometric
NVIDIA выпустили обертку над PyTorch для обучения моделей
Какие ошибки чаще всего совершают при обучении нейросетей
StyleGAN2: улучшенная нейросеть для генерации лиц людей
StyleGAN2 — Official TensorFlow Implementation
PyTorch — Краткое руководство 2019
Понимание PyTorch на примере
PyTorch Tutorial: How to Develop Deep Learning Models with Python
PyTorch at Tesla – Andrej Karpathy, Tesla
Pytorch Bidirectional LSTM example
Популярность PyTorch в среднем выросла на 243% за год
- pytorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration.
- pytorch vision: Datasets, Transforms and Models specific to Computer Vision.
- pt-styletransfer: Neural style transfer as a class in PyTorch.
- OpenFacePytorch: PyTorch module to use OpenFace’s nn4.small2.v1.t7 model
- img_classification_pk_pytorch: Quickly comparing your image classification models with the state-of-the-art models (such as DenseNet, ResNet, …)
- SparseConvNet: Submanifold sparse convolutional networks.
- Convolution_LSTM_pytorch: A multi-layer convolution LSTM module
- face-alignment: 2D and 3D face alignment library built using pytorch adrianbulat.com
- pytorch-semantic-segmentation: PyTorch for Semantic Segmentation.
- RoIAlign.pytorch: This is a PyTorch version of RoIAlign. This implementation is based on crop_and_resize and supports both forward and backward on CPU and GPU.
- pytorch-cnn-finetune: Fine-tune pretrained Convolutional Neural Networks with PyTorch.
- detectorch: Detectorch – detectron for PyTorch
- Augmentor: Image augmentation library in Python for machine learning. http://augmentor.readthedocs.io
- s2cnn: This library contains a PyTorch implementation of the SO(3) equivariant CNNs for spherical signals (e.g. omnidirectional cameras, signals on the globe)
- TorchCV: A PyTorch-Based Framework for Deep Learning in Computer Vision.
- maskrcnn-benchmark: Fast, modular reference implementation of Instance Segmentation and Object Detection algorithms in PyTorch.
- image-classification-mobile: Collection of classification models pretrained on the ImageNet-1K.
- medicaltorch: A medical imaging framework for Pytorch http://medicaltorch.readthedocs.io
- albumentations: Fast image augmentation library.
- kornia: Differentiable computer vision library.
- pytorch-text-recognition: Text recognition combo – CRAFT + CRNN.
- facenet-pytorch: Pretrained Pytorch face detection and recognition models ported from davidsandberg/facenet.
- detectron2: Detectron2 is FAIR’s next-generation research platform for object detection and segmentation.
- vedaseg: A semantic segmentation framework in PyTorch
- ClassyVision: An end-to-end PyTorch framework for image and video classification.
- detecto: Computer vision in Python with less than 10 lines of code
- pytorch3d: PyTorch3D is FAIR’s library of reusable components for deep learning with 3D data pytorch3d.org
- MMDetection: MMDetection is an open source object detection toolbox, a part of the OpenMMLab project.
- neural-dream: A PyTorch implementation of the DeepDream algorithm. Creates dream-like hallucinogenic visuals.
- FlashTorch: Visualization toolkit for neural networks in PyTorch!
- Lucent: Tensorflow and OpenAI Clarity’s Lucid adapted for PyTorch.
- MMDetection3D: MMDetection3D is OpenMMLab’s next-generation platform for general 3D object detection, a part of the OpenMMLab project.
- MMSegmentation: MMSegmentation is a semantic segmentation toolbox and benchmark, a part of the OpenMMLab project.
- MMEditing: MMEditing is an image and video editing toolbox, a part of the OpenMMLab project.
- MMAction2: MMAction2 is OpenMMLab’s next generation action understanding toolbox and benchmark, a part of the OpenMMLab project.
- MMPose: MMPose is a pose estimation toolbox and benchmark, a part of the OpenMMLab project.
Probabilistic/Generative Libraries:
- ptstat: Probabilistic Programming and Statistical Inference in PyTorch
- pyro: Deep universal probabilistic programming with Python and PyTorch http://pyro.ai
- probtorch: Probabilistic Torch is a library for deep generative models that extends PyTorch.
- paysage: Unsupervised learning and generative models in python/pytorch.
- pyvarinf: Python package facilitating the use of Bayesian Deep Learning methods with Variational Inference for PyTorch.
- pyprob: A PyTorch-based library for probabilistic programming and inference compilation.
- mia: A library for running membership inference attacks against ML models.
- pro_gan_pytorch: ProGAN package implemented as an extension of PyTorch nn.Module.
- botorch: Bayesian optimization in PyTorch
- pytorch-summary: Model summary in PyTorch similar to model.summary() in Keras
- Practical Pytorch: Tutorials explaining different RNN models
- DeepLearningForNLPInPytorch: An IPython Notebook tutorial on deep learning, with an emphasis on Natural Language Processing.
- pytorch-tutorial: tutorial for researchers to learn deep learning with pytorch.
- pytorch-exercises: pytorch-exercises collection.
- pytorch tutorials: Various pytorch tutorials.
- pytorch examples: A repository showcasing examples of using pytorch
- pytorch practice: Some example scripts on pytorch.
- pytorch mini tutorials: Minimal tutorials for PyTorch adapted from Alec Radford’s Theano tutorials.
- pytorch text classification: A simple implementation of CNN based text classification in Pytorch
- cats vs dogs: Example of network fine-tuning in pytorch for the kaggle competition Dogs vs. Cats Redux: Kernels Edition. Currently #27 (0.05074) on the leaderboard.
- convnet: This is a complete training example for Deep Convolutional Networks on various datasets (ImageNet, Cifar10, Cifar100, MNIST).
- pytorch-generative-adversarial-networks: simple generative adversarial network (GAN) using PyTorch.
- pytorch containers: This repository aims to help former Torchies more seamlessly transition to the “Containerless” world of PyTorch by providing a list of PyTorch implementations of Torch Table Layers.
- T-SNE in pytorch: t-SNE experiments in pytorch
- AAE_pytorch: Adversarial Autoencoders (with Pytorch).
- Kind_PyTorch_Tutorial: Kind PyTorch Tutorial for beginners.
- pytorch-poetry-gen: a char-RNN based on pytorch.
- pytorch-REINFORCE: PyTorch implementation of REINFORCE. This repo supports both continuous and discrete environments in OpenAI gym.
- PyTorch-Tutorial: Build your neural network easy and fast https://morvanzhou.github.io/tutorials/
- pytorch-intro: A couple of scripts to illustrate how to do CNNs and RNNs in PyTorch
- pytorch-classification: A unified framework for the image classification task on CIFAR-10/100 and ImageNet.
- pytorch_notebooks – hardmaru: Random tutorials created in NumPy and PyTorch.
- pytorch_tutoria-quick: Quick PyTorch introduction and tutorial. Targets computer vision, graphics and machine learning researchers eager to try a new framework.
- Pytorch_fine_tuning_Tutorial: A short tutorial on performing fine tuning or transfer learning in PyTorch.
- pytorch_exercises: pytorch-exercises
- traffic-sign-detection: nyu-cv-fall-2017 example
- mss_pytorch: Singing Voice Separation via Recurrent Inference and Skip-Filtering Connections – PyTorch Implementation. Demo: js-mim.github.io/mss_pytorch
- DeepNLP-models-Pytorch Pytorch implementations of various Deep NLP models in cs-224n(Stanford Univ: NLP with Deep Learning)
- Mila introductory tutorials: Various tutorials given for welcoming new students at MILA.
- pytorch.rl.learning: for learning reinforcement learning using PyTorch.
- minimal-seq2seq: Minimal Seq2Seq model with Attention for Neural Machine Translation in PyTorch
- tensorly-notebooks: Tensor methods in Python with TensorLy tensorly.github.io/dev
- pytorch_bits: time-series prediction related examples.
- skip-thoughts: An implementation of Skip-Thought Vectors in PyTorch.
- video-caption-pytorch: pytorch code for video captioning.
- Capsule-Network-Tutorial: Pytorch easy-to-follow Capsule Network tutorial.
- code-of-learn-deep-learning-with-pytorch: This is code of book “Learn Deep Learning with PyTorch” item.jd.com/17915495606.html
- RL-Adventure: Pytorch easy-to-follow step-by-step Deep Q Learning tutorial with clean readable code.
- accelerated_dl_pytorch: Accelerated Deep Learning with PyTorch at Jupyter Day Atlanta II.
- RL-Adventure-2: PyTorch4 tutorial of: actor critic / proximal policy optimization / acer / ddpg / twin dueling ddpg / soft actor critic / generative adversarial imitation learning / hindsight experience replay
- Generative Adversarial Networks (GANs) in 50 lines of code (PyTorch)
- adversarial-autoencoders-with-pytorch
- transfer learning using pytorch
- how-to-implement-a-yolo-object-detector-in-pytorch
- pytorch-for-recommenders-101
- pytorch-for-numpy-users
- PyTorch Tutorial: PyTorch Tutorials in Chinese.
- grokking-pytorch: The Hitchiker’s Guide to PyTorch
- PyTorch-Deep-Learning-Minicourse: Minicourse in Deep Learning with PyTorch.
- pytorch-custom-dataset-examples: Some custom dataset examples for PyTorch
- Multiplicative LSTM for sequence-based Recommenders
- deeplearning.ai-pytorch: PyTorch Implementations of Coursera’s Deep Learning(deeplearning.ai) Specialization.
- MNIST_Pytorch_python_and_capi: This is an example of how to train a MNIST network in Python and run it in c++ with pytorch 1.0
- torch_light: Tutorials and examples include Reinforcement Training, NLP, CV
- portrain-gan: torch code to decode (and almost encode) latents from art-DCGAN’s Portrait GAN.
- mri-analysis-pytorch: MRI analysis using PyTorch and MedicalTorch
- cifar10-fast: Demonstration of training a small ResNet on CIFAR10 to 94% test accuracy in 79 seconds as described in this blog series.
- Intro to Deep Learning with PyTorch: A free course by Udacity and facebook, with a good intro to PyTorch, and an interview with Soumith Chintala, one of the original authors of PyTorch.
- pytorch-sentiment-analysis: Tutorials on getting started with PyTorch and TorchText for sentiment analysis.
- pytorch-image-models: PyTorch image models, scripts, pretrained weights — (SE)ResNet/ResNeXT, DPN, EfficientNet, MobileNet-V3/V2/V1, MNASNet, Single-Path NAS, FBNet, and more.
- CIFAR-ZOO: Pytorch implementation for multiple CNN architectures and improve methods with state-of-the-art results.
- d2l-pytorch: This is an attempt to modify Dive into Deep Learning, Berkeley STAT 157 (Spring 2019) textbook’s code into PyTorch.
- thinking-in-tensors-writing-in-pytorch: Thinking in tensors, writing in PyTorch (a hands-on deep learning intro).
- NER-BERT-pytorch: PyTorch solution of named entity recognition task Using Google AI’s pre-trained BERT model.
- pytorch-sync-batchnorm-example: How to use Cross Replica / Synchronized Batchnorm in Pytorch.
- SentimentAnalysis: Sentiment analysis neural network trained by fine tuning BERT on the Stanford Sentiment Treebank, thanks to Hugging Face‘s Transformers library.
- pytorch-cpp: C++ implementations of PyTorch tutorials for deep learning researchers (based on the Python tutorials from pytorch-tutorial).
- Deep Learning with PyTorch: Zero to GANs: Interactive and coding-focused tutorial series on introduction to Deep Learning with PyTorch (video).
- Deep Learning with PyTorch: Deep Learning with PyTorch teaches you how to implement deep learning algorithms with Python and PyTorch, the book includes a case study: building an algorithm capable of detecting malignant lung tumors using CT scans.
- Serverless Machine Learning in Action with PyTorch and AWS: Serverless Machine Learning in Action is a guide to bringing your experimental PyTorch machine learning code to production using serverless capabilities from major cloud providers like AWS, Azure, or GCP.
Paper implementations
- google_evolution: This implements one of the resulting networks from Large-scale evolution of image classifiers by Esteban Real et al.
- pyscatwave: Fast Scattering Transform with CuPy/PyTorch; read the paper here
- scalingscattering: Scaling The Scattering Transform : Deep Hybrid Networks.
- deep-auto-punctuation: a pytorch implementation of auto-punctuation learned character by character.
- Realtime_Multi-Person_Pose_Estimation: This is a pytorch version of Realtime_Multi-Person_Pose_Estimation, origin code is here .
- PyTorch-value-iteration-networks: PyTorch implementation of the Value Iteration Networks (NIPS ’16) paper
- pytorch_Highway: Highway network implemented in pytorch.
- pytorch_NEG_loss: NEG loss implemented in pytorch.
- pytorch_RVAE: Recurrent Variational Autoencoder that generates sequential data implemented in pytorch.
- pytorch_TDNN: Time Delayed NN implemented in pytorch.
- eve.pytorch: An implementation of the Eve optimizer, proposed in Improving Stochastic Gradient Descent with Feedback, Koushik and Hayashi, 2016.
- e2e-model-learning: Task-based end-to-end model learning.
- pix2pix-pytorch: PyTorch implementation of “Image-to-Image Translation Using Conditional Adversarial Networks”.
- Single Shot MultiBox Detector: A PyTorch Implementation of Single Shot MultiBox Detector.
- DiscoGAN: PyTorch implementation of “Learning to Discover Cross-Domain Relations with Generative Adversarial Networks”
- official DiscoGAN implementation: Official implementation of “Learning to Discover Cross-Domain Relations with Generative Adversarial Networks”.
- pytorch-es: This is a PyTorch implementation of Evolution Strategies .
- piwise: Pixel-wise segmentation on VOC2012 dataset using pytorch.
- pytorch-dqn: Deep Q-Learning Network in pytorch.
- neuraltalk2-pytorch: image captioning model in pytorch (fine-tunable CNN in branch with_finetune)
- vnet.pytorch: A Pytorch implementation for V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation.
- pytorch-fcn: PyTorch implementation of Fully Convolutional Networks.
- WideResNets: WideResNets for CIFAR10/100 implemented in PyTorch. This implementation requires less GPU memory than what is required by the official Torch implementation: https://github.com/szagoruyko/wide-residual-networks .
- pytorch_highway_networks: Highway networks implemented in PyTorch.
- pytorch-NeuCom: Pytorch implementation of DeepMind’s differentiable neural computer paper.
- captionGen: Generate captions for an image using PyTorch.
- AnimeGAN: A simple PyTorch Implementation of Generative Adversarial Networks, focusing on anime face drawing.
- Cnn-text classification: This is the implementation of Kim’s Convolutional Neural Networks for Sentence Classification paper in PyTorch.
- deepspeech2: Implementation of DeepSpeech2 using Baidu Warp-CTC. Creates a network based on the DeepSpeech2 architecture, trained with the CTC activation function.
- seq2seq: This repository contains implementations of Sequence to Sequence (Seq2Seq) models in PyTorch
- Asynchronous Advantage Actor-Critic in PyTorch: This is a PyTorch implementation of A3C as described in Asynchronous Methods for Deep Reinforcement Learning. Since PyTorch provides an easy way to control shared memory across processes, we can easily implement asynchronous methods like A3C.
- densenet: This is a PyTorch implementation of the DenseNet-BC architecture as described in the paper Densely Connected Convolutional Networks by G. Huang, Z. Liu, K. Weinberger, and L. van der Maaten. This implementation gets a CIFAR-10+ error rate of 4.77 with a 100-layer DenseNet-BC with a growth rate of 12. Their official implementation and links to many other third-party implementations are available in the liuzhuang13/DenseNet repo on GitHub.
- nninit: Weight initialization schemes for PyTorch nn.Modules. This is a port of the popular nninit for Torch7 by @kaixhin.
- faster rcnn: This is a PyTorch implementation of Faster RCNN. This project is mainly based on py-faster-rcnn and TFFRCNN. For details about R-CNN please refer to the paper Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks by Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun.
- doomnet: PyTorch’s version of Doom-net implementing some RL models in ViZDoom environment.
- flownet: Pytorch implementation of FlowNet by Dosovitskiy et al.
- sqeezenet: Implementation of SqueezeNet in pytorch; pretrained models on CIFAR-10 are planned, along with training on CIFAR-10 and adding block connections.
- WassersteinGAN: wassersteinGAN in pytorch.
- optnet: This repository is by Brandon Amos and J. Zico Kolter and contains the PyTorch source code to reproduce the experiments in our paper OptNet: Differentiable Optimization as a Layer in Neural Networks.
- qp solver: A fast and differentiable QP solver for PyTorch. Crafted by Brandon Amos and J. Zico Kolter.
- Continuous Deep Q-Learning with Model-based Acceleration : Reimplementation of Continuous Deep Q-Learning with Model-based Acceleration.
- Learning to learn by gradient descent by gradient descent: PyTorch implementation of Learning to learn by gradient descent by gradient descent.
- fast-neural-style: pytorch implementation of fast-neural-style. The model uses the method described in Perceptual Losses for Real-Time Style Transfer and Super-Resolution along with Instance Normalization.
- PytorchNeuralStyleTransfer: Implementation of Neural Style Transfer in Pytorch.
- Fast Neural Style for Image Style Transform by Pytorch.
- neural style transfer: An introduction to PyTorch through the Neural-Style algorithm (https://arxiv.org/abs/1508.06576) developed by Leon A. Gatys, Alexander S. Ecker and Matthias Bethge.
- VIN_PyTorch_Visdom: PyTorch implementation of Value Iteration Networks (VIN): Clean, Simple and Modular. Visualization in Visdom.
- YOLO2: YOLOv2 in PyTorch.
- attention-transfer: Attention transfer in pytorch, read the paper here.
- SVHNClassifier: A PyTorch implementation of Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks.
- pytorch-deform-conv: PyTorch implementation of Deformable Convolution.
- BEGAN-pytorch: PyTorch implementation of BEGAN: Boundary Equilibrium Generative Adversarial Networks.
- treelstm.pytorch: Tree LSTM implementation in PyTorch.
- AGE: Code for paper “Adversarial Generator-Encoder Networks” by Dmitry Ulyanov, Andrea Vedaldi and Victor Lempitsky which can be found here
- ResNeXt.pytorch: Reproduces ResNet-V3 (Aggregated Residual Transformations for Deep Neural Networks) with pytorch.
- pytorch-rl: Deep Reinforcement Learning with pytorch & visdom
- Deep-Leafsnap: LeafSnap replicated using deep neural networks to test accuracy compared to traditional computer vision methods.
- pytorch-CycleGAN-and-pix2pix: PyTorch implementation for both unpaired and paired image-to-image translation.
- A3C-PyTorch: PyTorch implementation of Advantage Async Actor-Critic (A3C) algorithms
- pytorch-value-iteration-networks: Pytorch implementation of Value Iteration Networks (NIPS 2016 best paper)
- PyTorch-Style-Transfer: PyTorch Implementation of Multi-style Generative Network for Real-time Transfer
- pytorch-deeplab-resnet: pytorch-deeplab-resnet-model.
- pointnet.pytorch: pytorch implementation for “PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation” https://arxiv.org/abs/1612.00593
- pytorch-playground: Base pretrained models and datasets in pytorch (MNIST, SVHN, CIFAR10, CIFAR100, STL10, AlexNet, VGG16, VGG19, ResNet, Inception, SqueezeNet).
- pytorch-dnc: Neural Turing Machine (NTM) & Differentiable Neural Computer (DNC) with pytorch & visdom.
- pytorch_image_classifier: Minimal but practical image classifier pipeline using PyTorch; fine-tuned on ResNet18, reaching 99% accuracy on a small custom dataset.
- mnist-svhn-transfer: PyTorch Implementation of CycleGAN and SGAN for Domain Transfer (Minimal).
- pytorch-yolo2: pytorch-yolo2
- dni: Implement Decoupled Neural Interfaces using Synthetic Gradients in Pytorch
- wgan-gp: A pytorch implementation of Paper “Improved Training of Wasserstein GANs”.
- pytorch-seq2seq-intent-parsing: Intent parsing and slot filling in PyTorch with seq2seq + attention
- pyTorch_NCE: An implementation of the Noise Contrastive Estimation algorithm for pyTorch. Working, yet not very efficient.
- molencoder: Molecular AutoEncoder in PyTorch
- GAN-weight-norm: Code for “On the Effects of Batch and Weight Normalization in Generative Adversarial Networks”
- lgamma: Implementations of polygamma, lgamma, and beta functions for PyTorch
- bigBatch: Code used to generate the results appearing in “Train longer, generalize better: closing the generalization gap in large batch training of neural networks”
- rl_a3c_pytorch: Reinforcement learning with implementation of A3C LSTM for Atari 2600.
- pytorch-retraining: Transfer Learning Shootout for PyTorch’s model zoo (torchvision)
- nmp_qc: Neural Message Passing for Computer Vision
- grad-cam: Pytorch implementation of Grad-CAM
- pytorch-trpo: PyTorch Implementation of Trust Region Policy Optimization (TRPO)
- pytorch-explain-black-box: PyTorch implementation of Interpretable Explanations of Black Boxes by Meaningful Perturbation
- vae_vpflows: Code in PyTorch for the convex combination linear IAF and the Householder Flow, J.M. Tomczak & M. Welling https://jmtomczak.github.io/deebmed.html
- relational-networks: Pytorch implementation of “A simple neural network module for relational reasoning” (Relational Networks) https://arxiv.org/pdf/1706.01427.pdf
- vqa.pytorch: Visual Question Answering in Pytorch
- end-to-end-negotiator: Deal or No Deal? End-to-End Learning for Negotiation Dialogues
- odin-pytorch: Principled Detection of Out-of-Distribution Examples in Neural Networks.
- FreezeOut: Accelerate Neural Net Training by Progressively Freezing Layers.
- ARAE: Code for the paper “Adversarially Regularized Autoencoders for Generating Discrete Structures” by Zhao, Kim, Zhang, Rush and LeCun.
- forward-thinking-pytorch: Pytorch implementation of “Forward Thinking: Building and Training Neural Networks One Layer at a Time” https://arxiv.org/pdf/1706.02480.pdf
- context_encoder_pytorch: PyTorch Implement of Context Encoders
- attention-is-all-you-need-pytorch: A PyTorch implementation of the Transformer model in “Attention is All You Need”.
- OpenFacePytorch: PyTorch module to use OpenFace’s nn4.small2.v1.t7 model
- neural-combinatorial-rl-pytorch: PyTorch implementation of Neural Combinatorial Optimization with Reinforcement Learning.
- pytorch-nec: PyTorch Implementation of Neural Episodic Control (NEC)
- seq2seq.pytorch: Sequence-to-Sequence learning using PyTorch
- Pytorch-Sketch-RNN: a pytorch implementation of arxiv.org/abs/1704.03477
- pytorch-pruning: PyTorch Implementation of [1611.06440] Pruning Convolutional Neural Networks for Resource Efficient Inference
- DrQA: A pytorch implementation of Reading Wikipedia to Answer Open-Domain Questions.
- YellowFin_Pytorch: auto-tuning momentum SGD optimizer
- samplernn-pytorch: PyTorch implementation of SampleRNN: An Unconditional End-to-End Neural Audio Generation Model.
- AEGeAN: Deeper DCGAN with AE stabilization
- pytorch-SRResNet: pytorch implementation for Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network arXiv:1609.04802v2
- vsepp: Code for the paper “VSE++: Improved Visual Semantic Embeddings”
- Pytorch-DPPO: Pytorch implementation of Distributed Proximal Policy Optimization: arxiv.org/abs/1707.02286
- UNIT: PyTorch Implementation of our Coupled VAE-GAN algorithm for Unsupervised Image-to-Image Translation
- efficient_densenet_pytorch: A memory-efficient implementation of DenseNets
- tsn-pytorch: Temporal Segment Networks (TSN) in PyTorch.
- SMASH: An experimental technique for efficiently exploring neural architectures.
- pytorch-retinanet: RetinaNet in PyTorch
- biogans: Implementation supporting the ICCV 2017 paper “GANs for Biological Image Synthesis”.
- Semantic Image Synthesis via Adversarial Learning: A PyTorch implementation of the paper “Semantic Image Synthesis via Adversarial Learning” in ICCV 2017.
- fmpytorch: A PyTorch implementation of a Factorization Machine module in cython.
- ORN: A PyTorch implementation of the paper “Oriented Response Networks” in CVPR 2017.
- pytorch-maml: PyTorch implementation of MAML: arxiv.org/abs/1703.03400
- pytorch-generative-model-collections: Collection of generative models in Pytorch version.
- vqa-winner-cvprw-2017: Pytorch implementation of the winner of the VQA Challenge Workshop in CVPR’17.
- tacotron_pytorch: PyTorch implementation of Tacotron speech synthesis model.
- pspnet-pytorch: PyTorch implementation of PSPNet segmentation network
- LM-LSTM-CRF: Empower Sequence Labeling with Task-Aware Language Model http://arxiv.org/abs/1709.04109
- face-alignment: Pytorch implementation of the paper “How far are we from solving the 2D & 3D Face Alignment problem? (and a dataset of 230,000 3D facial landmarks)”, ICCV 2017
- DepthNet: PyTorch DepthNet Training on Still Box dataset.
- EDSR-PyTorch: PyTorch version of the paper ‘Enhanced Deep Residual Networks for Single Image Super-Resolution’ (CVPRW 2017)
- e2c-pytorch: Embed to Control implementation in PyTorch.
- 3D-ResNets-PyTorch: 3D ResNets for Action Recognition.
- bandit-nmt: This is code repo for our EMNLP 2017 paper “Reinforcement Learning for Bandit Neural Machine Translation with Simulated Human Feedback”, which implements the A2C algorithm on top of a neural encoder-decoder model and benchmarks the combination under simulated noisy rewards.
- pytorch-a2c-ppo-acktr: PyTorch implementation of Advantage Actor Critic (A2C), Proximal Policy Optimization (PPO) and Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation (ACKTR).
- zalando-pytorch: Various experiments on the Fashion-MNIST dataset from Zalando.
- sphereface_pytorch: A PyTorch Implementation of SphereFace.
- Categorical DQN: A PyTorch Implementation of Categorical DQN from A Distributional Perspective on Reinforcement Learning.
- pytorch-ntm: PyTorch Neural Turing Machine (NTM) implementation.
- mask_rcnn_pytorch: Mask RCNN in PyTorch.
- graph_convnets_pytorch: PyTorch implementation of graph ConvNets, NIPS’16
- pytorch-faster-rcnn: A pytorch implementation of faster RCNN detection framework based on Xinlei Chen’s tf-faster-rcnn.
- torchMoji: A pyTorch implementation of the DeepMoji model: state-of-the-art deep learning model for analyzing sentiment, emotion, sarcasm etc.
- semantic-segmentation-pytorch: Pytorch implementation for Semantic Segmentation/Scene Parsing on MIT ADE20K dataset
- pytorch-qrnn: PyTorch implementation of the Quasi-Recurrent Neural Network – up to 16 times faster than NVIDIA’s cuDNN LSTM
- pytorch-sgns: Skipgram Negative Sampling in PyTorch.
- SfmLearner-Pytorch: Pytorch version of SfmLearner from Tinghui Zhou et al.
- deformable-convolution-pytorch: PyTorch implementation of Deformable Convolution.
- skip-gram-pytorch: A complete pytorch implementation of skipgram model (with subsampling and negative sampling). The embedding result is tested with Spearman’s rank correlation.
- stackGAN-v2: Pytorch implementation for reproducing StackGAN_v2 results in the paper StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial Networks by Han Zhang*, Tao Xu*, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, Dimitris Metaxas.
- self-critical.pytorch: Unofficial pytorch implementation for Self-critical Sequence Training for Image Captioning.
- pygcn: Graph Convolutional Networks in PyTorch.
- dnc: Differentiable Neural Computers, for Pytorch
- prog_gans_pytorch_inference: PyTorch inference for “Progressive Growing of GANs” with CelebA snapshot.
- pytorch-capsule: Pytorch implementation of Hinton’s Dynamic Routing Between Capsules.
- PyramidNet-PyTorch: A PyTorch implementation for PyramidNets (Deep Pyramidal Residual Networks, arxiv.org/abs/1610.02915)
- radio-transformer-networks: A PyTorch implementation of Radio Transformer Networks from the paper “An Introduction to Deep Learning for the Physical Layer”. arxiv.org/abs/1702.00832
- honk: PyTorch reimplementation of Google’s TensorFlow CNNs for keyword spotting.
- DeepCORAL: A PyTorch implementation of ‘Deep CORAL: Correlation Alignment for Deep Domain Adaptation.’, ECCV 2016
- pytorch-pose: A PyTorch toolkit for 2D Human Pose Estimation.
- lang-emerge-parlai: Implementation of EMNLP 2017 Paper “Natural Language Does Not Emerge ‘Naturally’ in Multi-Agent Dialog” using PyTorch and ParlAI
- Rainbow: Rainbow: Combining Improvements in Deep Reinforcement Learning
- pytorch_compact_bilinear_pooling v1: This repository has a pure Python implementation of Compact Bilinear Pooling and Count Sketch for PyTorch.
- CompactBilinearPooling-Pytorch v2: (Yang Gao, et al.) A Pytorch Implementation for Compact Bilinear Pooling.
- FewShotLearning: Pytorch implementation of the paper “Optimization as a Model for Few-Shot Learning”
- meProp: Codes for “meProp: Sparsified Back Propagation for Accelerated Deep Learning with Reduced Overfitting”.
- SFD_pytorch: A PyTorch Implementation of Single Shot Scale-invariant Face Detector.
- GradientEpisodicMemory: Continuum Learning with GEM: Gradient Episodic Memory. https://arxiv.org/abs/1706.08840
- DeblurGAN: Pytorch implementation of the paper DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks.
- StarGAN: StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation.
- CapsNet-pytorch: PyTorch implementation of NIPS 2017 paper Dynamic Routing Between Capsules.
- CondenseNet: CondenseNet: An Efficient DenseNet using Learned Group Convolutions.
- deep-image-prior: Image restoration with neural networks but without learning.
- deep-head-pose: Deep Learning Head Pose Estimation using PyTorch.
- Random-Erasing: This code has the source code for the paper “Random Erasing Data Augmentation”.
- FaderNetworks: Fader Networks: Manipulating Images by Sliding Attributes – NIPS 2017
- FlowNet 2.0: FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks
- pix2pixHD: Synthesizing and manipulating 2048×1024 images with conditional GANs tcwang0509.github.io/pix2pixHD
- pytorch-smoothgrad: SmoothGrad implementation in PyTorch
- RetinaNet: An implementation of RetinaNet in PyTorch.
- faster-rcnn.pytorch: This project is a faster Faster R-CNN implementation, aimed at accelerating the training of Faster R-CNN object detection models.
- mixup_pytorch: A PyTorch implementation of the paper Mixup: Beyond Empirical Risk Minimization in PyTorch.
- inplace_abn: In-Place Activated BatchNorm for Memory-Optimized Training of DNNs
- pytorch-pose-hg-3d: PyTorch implementation for 3D human pose estimation
- nmn-pytorch: Neural Module Network for VQA in Pytorch.
- bytenet: Pytorch implementation of bytenet from “Neural Machine Translation in Linear Time” paper
- bottom-up-attention-vqa: vqa, bottom-up-attention, pytorch
- yolo2-pytorch: YOLOv2 is one of the most popular one-stage object detectors. This project adopts PyTorch as the development framework to increase productivity and uses ONNX to convert models into Caffe2 to benefit engineering deployment.
- reseg-pytorch: PyTorch Implementation of ReSeg (arxiv.org/pdf/1511.07053.pdf)
- binary-stochastic-neurons: Binary Stochastic Neurons in PyTorch.
- pytorch-pose-estimation: PyTorch Implementation of Realtime Multi-Person Pose Estimation project.
- interaction_network_pytorch: Pytorch Implementation of Interaction Networks for Learning about Objects, Relations and Physics.
- NoisyNaturalGradient: Pytorch Implementation of paper “Noisy Natural Gradient as Variational Inference”.
- ewc.pytorch: An implementation of Elastic Weight Consolidation (EWC), proposed in James Kirkpatrick et al. Overcoming catastrophic forgetting in neural networks 2016(10.1073/pnas.1611835114).
- pytorch-zssr: PyTorch implementation of 1712.06087 “Zero-Shot” Super-Resolution using Deep Internal Learning
- deep_image_prior: An implementation of image reconstruction methods from Deep Image Prior (Ulyanov et al., 2017) in PyTorch.
- pytorch-transformer: pytorch implementation of Attention is all you need.
- DeepRL-Grounding: This is a PyTorch implementation of the AAAI-18 paper Gated-Attention Architectures for Task-Oriented Language Grounding
- deep-forecast-pytorch: Wind Speed Prediction using LSTMs in PyTorch (arxiv.org/pdf/1707.08110.pdf)
- cat-net: Canonical Appearance Transformations
- minimal_glo: Minimal PyTorch implementation of Generative Latent Optimization from the paper “Optimizing the Latent Space of Generative Networks”
- LearningToCompare-Pytorch: Pytorch Implementation for Paper: Learning to Compare: Relation Network for Few-Shot Learning.
- poincare-embeddings: PyTorch implementation of the NIPS-17 paper “Poincaré Embeddings for Learning Hierarchical Representations”.
- pytorch-trpo (Hessian-vector product version): This is a PyTorch implementation of “Trust Region Policy Optimization (TRPO)” with exact Hessian-vector product instead of a finite differences approximation.
- ggnn.pytorch: A PyTorch Implementation of Gated Graph Sequence Neural Networks (GGNN).
- visual-interaction-networks-pytorch: An implementation of DeepMind’s Visual Interaction Networks paper using PyTorch.
- adversarial-patch: PyTorch implementation of adversarial patch.
- Prototypical-Networks-for-Few-shot-Learning-PyTorch: Implementation of Prototypical Networks for Few Shot Learning (arxiv.org/abs/1703.05175) in Pytorch
- Visual-Feature-Attribution-Using-Wasserstein-GANs-Pytorch: Implementation of Visual Feature Attribution using Wasserstein GANs (arxiv.org/abs/1711.08998) in PyTorch.
- PhotographicImageSynthesiswithCascadedRefinementNetworks-Pytorch: Photographic Image Synthesis with Cascaded Refinement Networks – Pytorch Implementation
- ENAS-pytorch: PyTorch implementation of “Efficient Neural Architecture Search via Parameters Sharing”.
- Neural-IMage-Assessment: A PyTorch Implementation of Neural IMage Assessment.
- proxprop: Proximal Backpropagation – a neural network training algorithm that takes implicit instead of explicit gradient steps.
- FastPhotoStyle: A Closed-form Solution to Photorealistic Image Stylization
- Deep-Image-Analogy-PyTorch: A python implementation of Deep-Image-Analogy based on pytorch.
- Person-reID_pytorch: PyTorch for Person re-ID.
- pt-dilate-rnn: Dilated RNNs in pytorch.
- pytorch-i-revnet: Pytorch implementation of i-RevNets.
- OrthNet: TensorFlow and PyTorch layers for generating Orthogonal Polynomials.
- DRRN-pytorch: An implementation of Deep Recursive Residual Network for Super Resolution (DRRN), CVPR 2017
- shampoo.pytorch: An implementation of shampoo.
- Neural-IMage-Assessment 2: A PyTorch Implementation of Neural IMage Assessment.
- TCN: Sequence modeling benchmarks and temporal convolutional networks locuslab/TCN
- DCC: This repository contains the source code and data for reproducing results of Deep Continuous Clustering paper.
- packnet: Code for PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning arxiv.org/abs/1711.05769
- PyTorch-progressive_growing_of_gans: PyTorch implementation of Progressive Growing of GANs for Improved Quality, Stability, and Variation.
- nonauto-nmt: PyTorch Implementation of “Non-Autoregressive Neural Machine Translation”
- PyTorch-GAN: PyTorch implementations of Generative Adversarial Networks.
- PyTorchWavelets: PyTorch implementation of the wavelet analysis found in Torrence and Compo (1998)
- pytorch-made: MADE (Masked Autoencoder Density Estimation) implementation in PyTorch
- VRNN: Pytorch implementation of the Variational RNN (VRNN), from A Recurrent Latent Variable Model for Sequential Data.
- flow: Pytorch implementation of ICLR 2018 paper Deep Learning for Physical Processes: Integrating Prior Scientific Knowledge.
- deepvoice3_pytorch: PyTorch implementation of convolutional networks-based text-to-speech synthesis models
- psmm: implementation of the Pointer Sentinel Mixture Model, as described in the paper by Stephen Merity et al.
- tacotron2: Tacotron 2 – PyTorch implementation with faster-than-realtime inference.
- AccSGD: Implements pytorch code for the Accelerated SGD algorithm.
- QANet-pytorch: an implementation of QANet with PyTorch (EM/F1 = 70.5/77.2 after 20 epochs, about 20 hours on one 1080Ti card).
- ConvE: Convolutional 2D Knowledge Graph Embeddings
- Structured-Self-Attention: Implementation for the paper A Structured Self-Attentive Sentence Embedding, which is published in ICLR 2017: arxiv.org/abs/1703.03130 .
- graphsage-simple: Simple reference implementation of GraphSAGE.
- Detectron.pytorch: A pytorch implementation of Detectron. Both training from scratch and inferring directly from pretrained Detectron weights are available.
- R2Plus1D-PyTorch: PyTorch implementation of the R2Plus1D convolution based ResNet architecture described in the paper “A Closer Look at Spatiotemporal Convolutions for Action Recognition”
- StackNN: A PyTorch implementation of differentiable stacks for use in neural networks.
- translagent: Code for Emergent Translation in Multi-Agent Communication.
- ban-vqa: Bilinear attention networks for visual question answering.
- pytorch-openai-transformer-lm: This is a PyTorch implementation of the TensorFlow code provided with OpenAI’s paper “Improving Language Understanding by Generative Pre-Training” by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
- T2F: Text-to-Face generation using Deep Learning. This project combines two of the recent architectures StackGAN and ProGAN for synthesizing faces from textual descriptions.
- pytorch-fid: A Port of Fréchet Inception Distance (FID score) to PyTorch
- CoordConv-pytorch: Pytorch implementation of CoordConv introduced in ‘An intriguing failing of convolutional neural networks and the CoordConv solution’ paper. (arxiv.org/pdf/1807.03247.pdf)
- SDPoint: Implementation of “Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks”, published in CVPR 2018.
- SRDenseNet-pytorch: SRDenseNet-pytorch(ICCV_2017)
- GAN_stability: Code for paper “Which Training Methods for GANs do actually Converge? (ICML 2018)”
- Mask-RCNN: A PyTorch implementation of the architecture of Mask RCNN, serves as an introduction to working with PyTorch
- pytorch-coviar: Compressed Video Action Recognition
- PNASNet.pytorch: PyTorch implementation of PNASNet-5 on ImageNet.
- NALU-pytorch: Basic pytorch implementation of NAC/NALU from Neural Arithmetic Logic Units arxiv.org/pdf/1808.00508.pdf
- LOLA_DiCE: Pytorch implementation of LOLA (arxiv.org/abs/1709.04326) using DiCE (arxiv.org/abs/1802.05098)
- generative-query-network-pytorch: Generative Query Network (GQN) in PyTorch as described in “Neural Scene Representation and Rendering”
- pytorch_hmax: Implementation of the HMAX model of vision in PyTorch.
- FCN-pytorch-easiest: aims to be the easiest, ready-to-use PyTorch implementation of FCN (Fully Convolutional Networks)
- transducer: A Fast Sequence Transducer Implementation with PyTorch Bindings.
- AVO-pytorch: Implementation of Adversarial Variational Optimization in PyTorch.
- HCN-pytorch: A pytorch reimplementation of { Co-occurrence Feature Learning from Skeleton Data for Action Recognition and Detection with Hierarchical Aggregation }.
- binary-wide-resnet: PyTorch implementation of Wide Residual Networks with 1-bit weights by McDonnel (ICLR 2018)
- piggyback: Code for Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights arxiv.org/abs/1801.06519
- vid2vid: Pytorch implementation of our method for high-resolution (e.g. 2048×1024) photorealistic video-to-video translation.
- poisson-convolution-sum: Implements an infinite sum of poisson-weighted convolutions
- tbd-nets: PyTorch implementation of “Transparency by Design: Closing the Gap Between Performance and Interpretability in Visual Reasoning” arxiv.org/abs/1803.05268
- attn2d: Pervasive Attention: 2D Convolutional Networks for Sequence-to-Sequence Prediction
- yolov3: YOLOv3: Training and inference in PyTorch pjreddie.com/darknet/yolo
- deep-dream-in-pytorch: Pytorch implementation of the DeepDream computer vision algorithm.
- pytorch-flows: PyTorch implementations of algorithms for density estimation
- quantile-regression-dqn-pytorch: Quantile Regression DQN a Minimal Working Example
- relational-rnn-pytorch: An implementation of DeepMind’s Relational Recurrent Neural Networks in PyTorch.
- DEXTR-PyTorch: Deep Extreme Cut http://www.vision.ee.ethz.ch/~cvlsegmentation/dextr
- PyTorch_GBW_LM: PyTorch Language Model for Google Billion Word Dataset.
- Pytorch-NCE: The Noise Contrastive Estimation for softmax output written in Pytorch
- generative-models: Annotated, understandable, and visually interpretable PyTorch implementations of: VAE, BIRVAE, NSGAN, MMGAN, WGAN, WGANGP, LSGAN, DRAGAN, BEGAN, RaGAN, InfoGAN, fGAN, FisherGAN.
- convnet-aig: PyTorch implementation for Convolutional Networks with Adaptive Inference Graphs.
- integrated-gradient-pytorch: This is the pytorch implementation of the paper – Axiomatic Attribution for Deep Networks.
- MalConv-Pytorch: Pytorch implementation of MalConv.
- trellisnet: Trellis Networks for Sequence Modeling
- Learning to Communicate with Deep Multi-Agent Reinforcement Learning: pytorch implementation of Learning to Communicate with Deep Multi-Agent Reinforcement Learning paper.
- pnn.pytorch: PyTorch implementation of CVPR’18 – Perturbative Neural Networks http://xujuefei.com/pnn.html.
- Face_Attention_Network: Pytorch implementation of face attention network as described in Face Attention Network: An Effective Face Detector for the Occluded Faces.
- waveglow: A Flow-based Generative Network for Speech Synthesis.
- deepfloat: This repository contains the SystemVerilog RTL, C++, HLS (Intel FPGA OpenCL to wrap RTL code) and Python needed to reproduce the numerical results in “Rethinking floating point for deep learning”
- EPSR: Pytorch implementation of Analyzing Perception-Distortion Tradeoff using Enhanced Perceptual Super-resolution Network. This work has won the first place in PIRM2018-SR competition (region 1) held as part of the ECCV 2018.
- ClariNet: A Pytorch Implementation of ClariNet arxiv.org/abs/1807.07281
- pytorch-pretrained-BERT: PyTorch version of Google AI’s BERT model with script to load Google’s pre-trained models
- torch_waveglow: A PyTorch implementation of the WaveGlow: A Flow-based Generative Network for Speech Synthesis.
- 3DDFA: The pytorch improved re-implementation of TPAMI 2017 paper: Face Alignment in Full Pose Range: A 3D Total Solution.
- loss-landscape: Code for visualizing the loss landscape of neural nets.
- famos: Pytorch implementation of the paper “Copy the Old or Paint Anew? An Adversarial Framework for (non-) Parametric Image Stylization” available at http://arxiv.org/abs/1811.09236.
- back2future.pytorch: This is a Pytorch implementation of Janai, J., Güney, F., Ranjan, A., Black, M. and Geiger, A., Unsupervised Learning of Multi-Frame Optical Flow with Occlusions. ECCV 2018.
- FFTNet: Unofficial implementation of the FFTNet vocoder paper.
- FaceBoxes.PyTorch: A PyTorch Implementation of FaceBoxes.
- Transformer-XL: Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context. https://github.com/kimiyoung/transformer-xl
- associative_compression_networks: Associative Compression Networks for Representation Learning.
- fluidnet_cxx: FluidNet re-written with ATen tensor lib.
- Deep-Reinforcement-Learning-Algorithms-with-PyTorch: This repository contains PyTorch implementations of deep reinforcement learning algorithms.
- Shufflenet-v2-Pytorch: This is a Pytorch implementation of faceplusplus’s ShuffleNet-v2.
- GraphWaveletNeuralNetwork: This is a Pytorch implementation of Graph Wavelet Neural Network. ICLR 2019.
- AttentionWalk: This is a Pytorch implementation of Watch Your Step: Learning Node Embeddings via Graph Attention. NIPS 2018.
- SGCN: This is a Pytorch implementation of Signed Graph Convolutional Network. ICDM 2018.
- SINE: This is a Pytorch implementation of SINE: Scalable Incomplete Network Embedding. ICDM 2018.
- GAM: This is a Pytorch implementation of Graph Classification using Structural Attention. KDD 2018.
- neural-style-pt: A PyTorch implementation of Justin Johnson’s Neural-style.
- TuckER: TuckER: Tensor Factorization for Knowledge Graph Completion.
- pytorch-prunes: Pruning neural networks: is it time to nip it in the bud?
- SimGNN: SimGNN: A Neural Network Approach to Fast Graph Similarity Computation.
- Character CNN: PyTorch implementation of the Character-level Convolutional Networks for Text Classification paper.
- XLM: PyTorch original implementation of Cross-lingual Language Model Pretraining.
- DiffAI: A provable defense against adversarial examples and library for building compatible PyTorch models.
- APPNP: Combining Neural Networks with Personalized PageRank for Classification on Graphs. ICLR 2019.
- NGCN: A Higher-Order Graph Convolutional Layer. NeurIPS 2018.
- gpt-2-Pytorch: Simple Text-Generator with OpenAI gpt-2 Pytorch Implementation
- Splitter: Splitter: Learning Node Representations that Capture Multiple Social Contexts. (WWW 2019).
- CapsGNN: Capsule Graph Neural Network. (ICLR 2019).
- BigGAN-PyTorch: The author’s officially unofficial PyTorch BigGAN implementation.
- ppo_pytorch_cpp: This is an implementation of the proximal policy optimization algorithm for the C++ API of Pytorch.
- RandWireNN: Implementation of: “Exploring Randomly Wired Neural Networks for Image Recognition”.
- Zero-shot Intent CapsNet: GPU-accelerated PyTorch implementation of “Zero-shot User Intent Detection via Capsule Neural Networks”.
- SEAL-CI Semi-Supervised Graph Classification: A Hierarchical Graph Perspective. (WWW 2019).
- MixHop: MixHop: Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing. ICML 2019.
- densebody_pytorch: PyTorch implementation of CloudWalk’s recent paper DenseBody.
- voicefilter: Unofficial PyTorch implementation of Google AI’s VoiceFilter system http://swpark.me/voicefilter.
- NVIDIA/semantic-segmentation: A PyTorch Implementation of Improving Semantic Segmentation via Video Propagation and Label Relaxation, In CVPR2019.
- ClusterGCN: A PyTorch implementation of “Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks” (KDD 2019).
- NVlabs/DG-Net: A PyTorch implementation of “Joint Discriminative and Generative Learning for Person Re-identification” (CVPR19 Oral).
- NCRF: Cancer metastasis detection with neural conditional random field (NCRF)
- pytorch-sift: PyTorch implementation of SIFT descriptor.
- brain-segmentation-pytorch: U-Net implementation in PyTorch for FLAIR abnormality segmentation in brain MRI.
- glow-pytorch: PyTorch implementation of Glow, Generative Flow with Invertible 1×1 Convolutions (arxiv.org/abs/1807.03039)
- EfficientNets-PyTorch: A PyTorch implementation of EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks.
- STEAL: STEAL – Learning Semantic Boundaries from Noisy Annotations nv-tlabs.github.io/STEAL
- EigenDamage-Pytorch: Official implementation of the ICML’19 paper “EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis”.
- Aspect-level-sentiment: Code and dataset for ACL2018 paper “Exploiting Document Knowledge for Aspect-level Sentiment Classification”
- breast_cancer_classifier: Deep Neural Networks Improve Radiologists’ Performance in Breast Cancer Screening arxiv.org/abs/1903.08297
- DGC-Net: A PyTorch implementation of “DGC-Net: Dense Geometric Correspondence Network”.
- universal-triggers: Universal Adversarial Triggers for Attacking and Analyzing NLP (EMNLP 2019)
- simple-effective-text-matching-pytorch: A pytorch implementation of the ACL2019 paper “Simple and Effective Text Matching with Richer Alignment Features”.
- Adaptive-segmentation-mask-attack (ASMA): A pytorch implementation of the MICCAI2019 paper “Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation”.
- NVIDIA/unsupervised-video-interpolation: A PyTorch Implementation of Unsupervised Video Interpolation Using Cycle Consistency, In ICCV 2019.
- Seg-Uncertainty: Unsupervised Scene Adaptation with Memory Regularization in vivo, In IJCAI 2020.
- pulse: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models
- distance-encoding: Distance Encoding – Design Provably More Powerful GNNs for Structural Representation Learning.
Talks & conferences
- PyTorch Conference 2018: First PyTorch developer conference at 2018.
Pytorch elsewhere
- the-incredible-pytorch: The Incredible PyTorch: a curated list of tutorials, papers, projects, communities and more relating to PyTorch.
- generative models: Collection of generative models, e.g. GAN, VAE in Tensorflow, Keras, and Pytorch. http://wiseodd.github.io
- pytorch vs tensorflow: an informative thread on reddit.
- Pytorch discussion forum
- pytorch notebook: docker-stack: A project similar to Jupyter Notebook Scientific Python Stack
- drawlikebobross: Draw like Bob Ross using the power of Neural Networks (With PyTorch)!
- pytorch-tvmisc: Totally Versatile Miscellanea for Pytorch
- pytorch-a3c-mujoco: Implement A3C for Mujoco gym envs.
- PyTorch in 5 Minutes.
- pytorch_chatbot: A Marvelous ChatBot implemented using PyTorch.
- malmo-challenge: Malmo Collaborative AI Challenge – Team Pig Catcher
- sketchnet: A model that takes an image and generates Processing source code to regenerate that image
- Deep-Learning-Boot-Camp: A nonprofit, community-run 5-day Deep Learning Bootcamp http://deep-ml.com.
- Amazon_Forest_Computer_Vision: Satellite image tagging code using PyTorch/Keras with lots of PyTorch tricks. Kaggle competition.
- AlphaZero_Gomoku: An implementation of the AlphaZero algorithm for Gomoku (also called Gobang or Five in a Row)
- pytorch-cv: Repo for Object Detection, Segmentation & Pose Estimation.
- deep-person-reid: Pytorch implementation of deep person re-identification approaches.
- pytorch-template: PyTorch template project
- Deep Learning With Pytorch TextBook: A practical guide to building neural network models in text and vision using PyTorch. Purchase on Amazon; GitHub code repo.
- compare-tensorflow-pytorch: Compare outputs between layers written in Tensorflow and layers written in Pytorch.
- hasktorch: Tensors and neural networks in Haskell
- Deep Learning With Pytorch: Deep Learning with PyTorch teaches you how to implement deep learning algorithms with Python and PyTorch.
- nimtorch: PyTorch – Python + Nim
- derplearning: Self Driving RC Car Code.
- pytorch-saltnet: Kaggle | 9th place single model solution for TGS Salt Identification Challenge.
- pytorch-scripts: A few Windows-specific scripts for PyTorch.
- pytorch_misc: Code snippets created for the PyTorch discussion board.
- awesome-pytorch-scholarship: A list of awesome PyTorch scholarship articles, guides, blogs, courses and other resources.
- MentisOculi: A raytracer written in PyTorch (raynet?)
- DoodleMaster: “Don’t code your UI, Draw it !”
- ocaml-torch: OCaml bindings for PyTorch.
- extension-script: Example repository for custom C++/CUDA operators for TorchScript.
- pytorch-inference: PyTorch 1.0 inference in C++ on Windows 10 platforms.
- pytorch-cpp-inference: Serving PyTorch 1.0 Models as a Web Server in C++.
- tch-rs: Rust bindings for PyTorch.
- TorchSharp: .NET bindings for the Pytorch engine
- ML Workspace: All-in-one web IDE for machine learning and data science. Combines Jupyter, VS Code, PyTorch, and many other tools/libraries into one Docker image.
- PyTorch Style Guide: Style guide for PyTorch code. Consistent and good code style helps collaboration and prevents errors!
PyTorch – Convolutional Neural Networks
Lecture: Convolutional Neural Networks
Parameters
• in_channels (int) – Number of channels in the input image
• out_channels (int) – Number of channels produced by the convolution
• kernel_size (int or tuple) – Size of the convolving kernel
• stride (int or tuple, optional) – Stride of the convolution. (Default: 1)
• padding (int or tuple, optional) – Zero-padding added to both sides of the input (Default: 0)
• padding_mode (string, optional) – ‘zeros’, ‘reflect’, ‘replicate’ or ‘circular’. (Default: ‘zeros’)
• dilation (int or tuple, optional) – Spacing between kernel elements. (Default: 1)
• groups (int, optional) – Number of blocked connections from input to output channels. (Default: 1)
• bias (bool, optional) – If True, adds a learnable bias to the output. (Default: True)
And this URL has a helpful visualization of the process.
So in_channels at the start is 3 for images with 3 channels (colored images). For black-and-white images it should be 1, and some satellite images may have 4.
out_channels is the number of feature maps the convolution produces, i.e. the number of filters.
Let’s create an example to “prove” that.
import torch
import torch.nn as nn
c = nn.Conv2d(1, 3, stride=1, kernel_size=(4, 5))
print(c.weight.shape)
print(c.weight)
Out
torch.Size([3, 1, 4, 5])
Parameter containing:
tensor([[[[ 0.1571,  0.0723,  0.0900,  0.1573,  0.0537],
          [-0.1213,  0.0579,  0.0009, -0.1750,  0.1616],
          [-0.0427,  0.1968,  0.1861, -0.1787, -0.2035],
          [-0.0796,  0.1741, -0.2231,  0.2020, -0.1762]]],

        [[[ ...,
          [ 0.1885, -0.0440, -0.1638,  0.1429, -0.0606],
          [-0.1395, -0.1202,  0.0498,  0.0432, -0.1132],
          [-0.2073,  0.1480, -0.1296, -0.1661, -0.0633]]],

        [[[ ...,
          [ 0.0968, -0.1157,  0.1012,  0.0863, -0.1844],
          [-0.2080, -0.1355, -0.1842, -0.0017, -0.2123],
          [-0.1495, -0.2196,  0.1811,  0.1672, -0.1817]]]],
       requires_grad=True)
If we alter the number of out_channels:
c = nn.Conv2d(1, 5, stride=1, kernel_size=(4, 5))
print(c.weight.shape) # torch.Size([5, 1, 4, 5])
we get 5 filters, each of size 4×5, since that is our kernel size. If we instead set 2 input channels (some images have only 2 channels):
c = nn.Conv2d(2, 5, stride=1, kernel_size=(4, 5))
print(c.weight.shape) # torch.Size([5, 2, 4, 5])
each filter will have 2 channels.
I think they take their terms from this book, and since the book doesn’t call them filters, they haven’t used that term.
So you are right: filters are what the conv layer learns, and the number of filters equals the number of output channels. They are randomly initialized at the start.
The number of activations is computed from the batch size and the image dimensions:
bs = 16
x = torch.randn(bs, 3, 28, 28)  # a batch of 16 RGB 28×28 images
c = nn.Conv2d(3, 10, kernel_size=5, stride=1, padding=2)  # padding=2 keeps the 28×28 size
out = c(x)
print(out.nelement())  # 125440 activations = 16 * 10 * 28 * 28
class torch.nn.Conv2d(in_channels: int, out_channels: int, kernel_size: Union[T, Tuple[T, T]], stride: Union[T, Tuple[T, T]] = 1, padding: Union[T, Tuple[T, T]] = 0, dilation: Union[T, Tuple[T, T]] = 1, groups: int = 1, bias: bool = True, padding_mode: str = ‘zeros’)
Applies a 2D convolution over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size $(N, C_{\text{in}}, H, W)$ and output $(N, C_{\text{out}}, H_{\text{out}}, W_{\text{out}})$ can be precisely described as:

$$\text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k=0}^{C_{\text{in}} - 1} \text{weight}(C_{\text{out}_j}, k) \star \text{input}(N_i, k)$$

where $\star$ is the valid 2D cross-correlation operator, $N$ is the batch size, $C$ denotes the number of channels, $H$ is the height of the input planes in pixels, and $W$ is the width in pixels.
• stride controls the stride for the cross-correlation, a single number or a tuple.
• padding controls the amount of implicit zero-padding added to both sides of the input, padding points for each dimension.
• dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.
• groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example (see the sketch after this list):
o At groups=1, all inputs are convolved to all outputs.
o At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, with both outputs subsequently concatenated.
o At groups=in_channels, each input channel is convolved with its own set of filters, of size $\left\lfloor \frac{\text{out\_channels}}{\text{in\_channels}} \right\rfloor$.
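As a quick check, here is a minimal sketch (not from the PyTorch docs; the sizes are arbitrary) showing that groups=2 behaves like two independent conv layers laid side by side over the two channel halves:
import torch
import torch.nn as nn

x = torch.randn(1, 4, 8, 8)
g = nn.Conv2d(4, 6, kernel_size=3, groups=2, bias=False)

# Two ungrouped convs that reuse the grouped conv's weights.
# g.weight has shape (6, 2, 3, 3) because in_channels / groups = 2.
a = nn.Conv2d(2, 3, kernel_size=3, bias=False)
b = nn.Conv2d(2, 3, kernel_size=3, bias=False)
with torch.no_grad():
    a.weight.copy_(g.weight[:3])  # filters producing the first 3 output channels
    b.weight.copy_(g.weight[3:])  # filters producing the last 3 output channels

out_grouped = g(x)
out_split = torch.cat([a(x[:, :2]), b(x[:, 2:])], dim=1)
print(torch.allclose(out_grouped, out_split))  # True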
The parameters kernel_size, stride, padding, dilation can either be:
• a single int – in which case the same value is used for the height and width dimension
• a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension
NOTE
Depending on the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, not a full cross-correlation. It is up to the user to add proper padding.
NOTE
When groups == in_channels and out_channels == K * in_channels, where K is a positive integer, this operation is also termed in the literature a depthwise convolution.
In other words, for an input of size $(N, C_{\text{in}}, H_{\text{in}}, W_{\text{in}})$, a depthwise convolution with a depthwise multiplier K can be constructed with the arguments $(\text{in\_channels} = C_{\text{in}}, \text{out\_channels} = C_{\text{in}} \times K, \ldots, \text{groups} = C_{\text{in}})$.
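For instance, a minimal sketch (example sizes assumed) of a depthwise convolution with depthwise multiplier K = 2:
import torch
import torch.nn as nn

C_in, K = 3, 2
# groups=C_in: each input channel is convolved with its own K filters
dw = nn.Conv2d(C_in, C_in * K, kernel_size=3, padding=1, groups=C_in)
x = torch.randn(1, C_in, 28, 28)
print(dw(x).shape)      # torch.Size([1, 6, 28, 28])
print(dw.weight.shape)  # torch.Size([6, 1, 3, 3]) – each filter sees one channel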
NOTE
In some circumstances when using the CUDA backend with CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. Please see the notes on Reproducibility for background.
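A short sketch of the settings involved (the benchmark flag and the manual seed are commonly paired additions, not part of the note above):
import torch

torch.backends.cudnn.deterministic = True  # pick deterministic cuDNN algorithms
torch.backends.cudnn.benchmark = False     # assumption: also disable the cuDNN autotuner
torch.manual_seed(0)                       # assumption: fix the RNG seed for repeatable runs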
Parameters
• in_channels (int) – Number of channels in the input image
• out_channels (int) – Number of channels produced by the convolution
• kernel_size (int or tuple) – Size of the convolving kernel
• stride (int or tuple, optional) – Stride of the convolution. Default: 1
• padding (int or tuple, optional) – Zero-padding added to both sides of the input. Default: 0
• padding_mode (string, optional) – ‘zeros’, ‘reflect’, ‘replicate’ or ‘circular’. Default: ‘zeros’
• dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
• groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
• bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
Shape:
• Input: $(N, C_{\text{in}}, H_{\text{in}}, W_{\text{in}})$
• Output: $(N, C_{\text{out}}, H_{\text{out}}, W_{\text{out}})$, where

$$H_{\text{out}} = \left\lfloor \frac{H_{\text{in}} + 2 \times \text{padding}[0] - \text{dilation}[0] \times (\text{kernel\_size}[0] - 1) - 1}{\text{stride}[0]} + 1 \right\rfloor$$

$$W_{\text{out}} = \left\lfloor \frac{W_{\text{in}} + 2 \times \text{padding}[1] - \text{dilation}[1] \times (\text{kernel\_size}[1] - 1) - 1}{\text{stride}[1]} + 1 \right\rfloor$$
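A minimal sketch (reusing the sizes from the Examples below) that checks these formulas against the shape PyTorch actually produces:
import math
import torch
import torch.nn as nn

H_in, W_in = 50, 100
kernel, stride, padding, dilation = (3, 5), (2, 1), (4, 2), (3, 1)
m = nn.Conv2d(16, 33, kernel, stride=stride, padding=padding, dilation=dilation)
out = m(torch.randn(20, 16, H_in, W_in))

def conv_out(size, k, s, p, d):
    # the H_out / W_out formula from the Shape section above
    return math.floor((size + 2 * p - d * (k - 1) - 1) / s + 1)

h = conv_out(H_in, kernel[0], stride[0], padding[0], dilation[0])
w = conv_out(W_in, kernel[1], stride[1], padding[1], dilation[1])
print(out.shape)  # torch.Size([20, 33, 26, 100])
print(h, w)       # 26 100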
Variables
• ~Conv2d.weight (Tensor) – the learnable weights of the module, of shape $(\text{out\_channels}, \frac{\text{in\_channels}}{\text{groups}}, \text{kernel\_size}[0], \text{kernel\_size}[1])$. The values of these weights are sampled from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$, where $k = \frac{\text{groups}}{C_{\text{in}} \cdot \prod_{i=0}^{1} \text{kernel\_size}[i]}$
• ~Conv2d.bias (Tensor) – the learnable bias of the module, of shape $(\text{out\_channels})$. If bias is True, then the values are sampled from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$, where $k = \frac{\text{groups}}{C_{\text{in}} \cdot \prod_{i=0}^{1} \text{kernel\_size}[i]}$
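A quick sketch (example sizes assumed) confirming the documented weight and bias shapes:
import torch.nn as nn

m = nn.Conv2d(16, 33, (3, 5), groups=1)
print(m.weight.shape)  # torch.Size([33, 16, 3, 5]) = (out_channels, in_channels/groups, kH, kW)
print(m.bias.shape)    # torch.Size([33]) = (out_channels,)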
Examples
>>> # With square kernels and equal stride
>>> m = nn.Conv2d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
>>> # non-square kernels and unequal stride and with padding and dilation
>>> m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))
>>> input = torch.randn(20, 16, 50, 100)
>>> output = m(input)
By the end of 2020, the PyTorch machine learning library will have more developers working on it than TensorFlow. PyTorch was created by Facebook, TensorFlow by Google; both are open source. TensorFlow is considered the de facto standard and appeared earlier than PyTorch. But according to OpenHub, over the past year both projects have had roughly the same number of active developers. The TensorFlow user community is much larger than PyTorch’s, yet in the research community Facebook’s library has pulled ahead and is now used far more widely. PyTorch’s cited advantage is that it is a native library for Python, currently the most widely used programming language for machine learning tasks, whereas TensorFlow provides a separate API for use from Python. In addition, PyTorch uses a dynamic graph model, which simplifies programming, although TensorFlow has offered a similar feature since version 2.0.
Deep Learning (with PyTorch)
Week 1 – Lecture: History, motivation, and evolution of Deep Learning
Week 1 – Practicum: Classification, linear algebra, and visualisation
Week 2 – Lecture: Stochastic gradient descent and backpropagation
Week 3 – Lecture: Convolutional neural networks