What Is PyTorch: A Complete Guide

What do Disney, Tesla, and weed-killing robots have in common? They're all using PyTorch to realize their machine learning goals.

The fact that two of the world's largest corporations rely on an open-source framework should, on its own, raise a few eyebrows. It also feeds the PyTorch vs TensorFlow debate, with TensorFlow being the better-known of the two.

PyTorch has a lot more going for it than simply being open source, however. There are many reasons why Disney prefers the machine learning framework for its innovative facial recognition projects.

PyTorch defined

We're going to give you a guided tour of the machine learning framework developed by Facebook.

What Is PyTorch? It is an open-source library designed with Python in mind and built for machine learning projects. It specializes in automatic differentiation, tensor computations, and GPU acceleration. This makes it uniquely suited for cutting-edge machine learning applications like deep learning.

PyTorch is particularly popular among researchers because of its flexibility: creating custom data layers and network architectures is especially easy in Python.

PyTorch Deep Learning

PyTorch is based on Torch, an early deep learning framework written in Lua. PyTorch takes the deep learning potential of Torch and brings it into the Python environment.

Python has become one of the most popular programming languages for data science, alongside languages like R. Naturally, data scientists, programmers, and developers want to integrate neural networks and deep learning into their Python projects.

One of the earliest attempts at integrating deep learning into Python was Keras. Released in 2015, Keras exposed an API for training neural networks in Python. The Keras API was closely modeled after scikit-learn, one of the most popular machine learning libraries in Python.

TensorFlow was soon to follow. Also created in 2015, TensorFlow became the de facto backend for Keras. It also features some low-level functionality that is more difficult to implement using Keras.

TensorFlow's early API wasn't particularly well-suited for Python, however. Facebook set out to remedy this dilemma. Launched in September 2016, PyTorch was their solution to some of the problems researchers were having with Keras and TensorFlow.

PyTorch killed two birds with one stone: it remedied several of the shortcomings found in Keras while providing a more intuitive API.

With that said, both Keras and TensorFlow have since addressed many of these early shortcomings. At this point, there isn't a clear victor in the TensorFlow vs PyTorch debate; both perform similar functions, just with different syntax.

There are some differences, however. Let's take a moment with the PyTorch vs TensorFlow conversation before we go any further.

PyTorch Vs TensorFlow

There were far more differences in PyTorch vs TensorFlow when they were first released. Many of these inconsistencies have since been ironed out. There are still some disparities, however, which are worth looking at:

API
The limitations of TensorFlow's API were the first thing that prompted the creation of PyTorch. TensorFlow's API has since been updated considerably, but PyTorch was designed from the start to bring a machine learning library into the Python environment.

Computation Graph
Computation graphs are some of the more significant differences between PyTorch and TensorFlow.

TensorFlow historically used static computation graphs to allocate resources. It first builds a graph for the entire series of calculations you want to perform, using placeholder data while resources are being allocated.

The real data is then plugged in after the fact.

PyTorch, on the other hand, uses dynamic computation graphs. The graph is built on the fly as each operation executes, so results are available as soon as each line of code runs.

Static computation graphs are easier to optimize for performance. They're a pain to debug, however, which makes dynamic computation preferable for a lot of applications.
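To make the difference concrete, here's a minimal sketch of our own (not from the original article) showing how a dynamic graph lets ordinary Python control flow shape the computation:

```python
import torch

def forward(x, n):
    # The graph is rebuilt on every call, so a plain Python loop
    # determines how many operations end up in it.
    for _ in range(n):
        x = x * 2
    return x

x = torch.tensor(3.0, requires_grad=True)
y = forward(x, 3)  # y = 8 * x
y.backward()
print(x.grad)      # tensor(8.)
```

Because the loop count is an ordinary Python value, each call can produce a differently shaped graph, something a static graph would need special control-flow operations to express.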

Distributed Computing
In its earliest days, running TensorFlow across multiple devices or platforms was prohibitively difficult. You would have to fine-tune TensorFlow by hand for it to run smoothly in decentralized applications.

PyTorch doesn't have the same limitations. As with many of the other issues we've discussed, TensorFlow has solved a lot of these problems in the ensuing years. Google also created Tensor Processing Units (TPUs), custom accelerators for exactly this kind of workload.

TPUs can be even faster than GPUs and are now widely available. PyTorch isn't as adept at handling TPUs natively, but this can be addressed with libraries like PyTorch/XLA.

Getting Started With PyTorch

Now that we've answered the "What is PyTorch?" question, we're going to show you how to get started with a brief tutorial. This way, you can see this language framework in action and see how it fits into your workflow. For the sake of this PyTorch tutorial, we'll be using Python and a barebones command prompt on a PC. Feel free to adjust these instructions for Python environments like Anaconda if that's what you're more comfortable with.

Install PyTorch

To start, navigate to your programming folder using the command prompt. Type the command md PyTorch (or mkdir PyTorch on macOS and Linux) to create a directory for this tutorial. Then navigate into the new folder.

Now we're going to install the PyTorch library using Python's pip package manager.

$ pip install torch torchvision

Now you'll be able to import torch and torchvision into your Python programs.
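Once the install finishes, a quick check confirms the library is importable and reports whether a CUDA-capable GPU is visible:

```python
import torch

print(torch.__version__)          # the installed PyTorch version
print(torch.cuda.is_available())  # True if a CUDA-capable GPU is usable
```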

Understanding Tensors

PyTorch represents data in multi-dimensional arrays known as tensors. This is similar to how data is handled in other popular Python libraries like NumPy.

Here's an example of creating a tensor using NumPy:



>>> import numpy as np
>>> np.array([[0.0, 1.3], [2.8, 3.3], [4.1, 5.2], [6.9, 7.0]])
array([[0. , 1.3],
       [2.8, 3.3],
       [4.1, 5.2],
       [6.9, 7. ]])

Here's an example of the same thing implemented using PyTorch:



>>> import torch
>>> torch.tensor([[0.0, 1.3], [2.8, 3.3], [4.1, 5.2], [6.9, 7.0]])
tensor([[0.0000, 1.3000],
        [2.8000, 3.3000],
        [4.1000, 5.2000],
        [6.9000, 7.0000]])

These might look nearly identical. The difference is that PyTorch records a computation graph behind the scenes, which lets it perform automatic differentiation.
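Here's a small sketch of that in action: marking a tensor with requires_grad tells autograd to record operations on it, and backward() computes gradients through the recorded graph.

```python
import torch

# requires_grad=True tells autograd to track operations on x.
x = torch.tensor([[0.0, 1.3], [2.8, 3.3]], requires_grad=True)
y = (x ** 2).sum()  # a scalar built from tensor operations
y.backward()        # walk the recorded graph backward
print(x.grad)       # the gradient of y with respect to x is 2*x
```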

Working With Data in PyTorch

PyTorch features two primitives for working with data: torch.utils.data.Dataset and torch.utils.data.DataLoader. A Dataset stores the samples and their corresponding labels, while a DataLoader wraps an iterable around the Dataset.

PyTorch also has libraries for specific applications. These are TorchText, TorchVision, and TorchAudio. Each of these has its own datasets.

We'll use a TorchVision dataset for this tutorial. Every TorchVision dataset accepts two arguments: transform and target_transform. transform modifies the samples, while target_transform modifies the labels.



from torchvision import datasets
from torchvision.transforms import ToTensor

# Download training data from open datasets.
training_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor(),
)

# Download test data from open datasets.
test_data = datasets.FashionMNIST(
    root="data",
    train=False,
    download=True,
    transform=ToTensor(),
)
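The FashionMNIST download above requires a network connection, but the Dataset/DataLoader relationship itself can be illustrated offline with an in-memory TensorDataset (a stand-in of our own, not part of the original tutorial):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# 100 fake 1x8x8 "images" with integer labels in [0, 10).
X = torch.randn(100, 1, 8, 8)
y = torch.randint(0, 10, (100,))
dataset = TensorDataset(X, y)  # a Dataset: stores samples and labels

# DataLoader wraps the Dataset in an iterable of mini-batches.
loader = DataLoader(dataset, batch_size=32, shuffle=True)

for xb, yb in loader:
    print(xb.shape, yb.shape)  # torch.Size([32, 1, 8, 8]) torch.Size([32])
    break
```

The same batching and shuffling behavior applies unchanged once you swap the toy dataset for FashionMNIST.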

Creating Models in PyTorch

Creating a neural network in PyTorch is easy. You just need to create a class that inherits from nn.Module. You define the layers of the network in the __init__ function, then specify how data moves through the network in the forward function.



import torch
from torch import nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Define model
class NeuralNetwork(nn.Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10)
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

model = NeuralNetwork().to(device)
print(model)

Optimizing PyTorch Models

Training neural networks in PyTorch is also simple. You just need to create a loss function and an optimizer.



loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

In the training loop, the model makes predictions on the training input, and the prediction error (the loss) is backpropagated to adjust the model's parameters.



def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    model.train()
    for batch, (X, y) in enumerate(dataloader):
        X, y = X.to(device), y.to(device)

        # Compute prediction error
        pred = model(X)
        loss = loss_fn(pred, y)

        # Backpropagation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if batch % 100 == 0:
            loss, current = loss.item(), batch * len(X)
            print(f"loss: {loss:>7f}  [{current:>5d}/{size:>5d}]")


We'll also want to evaluate the model against the test dataset to make sure it's performing properly.



def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    model.eval()
    test_loss, correct = 0, 0
    with torch.no_grad():
        for X, y in dataloader:
            X, y = X.to(device), y.to(device)
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            correct += (pred.argmax(1) == y).type(torch.float).sum().item()
    test_loss /= num_batches
    correct /= size
    print(f"Test Error: Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f}")

This setup improves the model with each full pass over the training data; each such pass is known as an epoch.
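The train and test functions above are normally driven by an outer loop over epochs. Here's a self-contained sketch using a toy model and dataset of our own (stand-ins for the FashionMNIST setup above, not part of the original tutorial):

```python
import torch
from torch import nn
from torch.utils.data import TensorDataset, DataLoader

device = "cuda" if torch.cuda.is_available() else "cpu"

# Toy stand-ins for the model, loss, optimizer, and data defined earlier.
model = nn.Sequential(nn.Flatten(), nn.Linear(8 * 8, 10)).to(device)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

data = TensorDataset(torch.randn(64, 1, 8, 8), torch.randint(0, 10, (64,)))
loader = DataLoader(data, batch_size=16)

def run_epoch(dataloader, model, loss_fn, optimizer):
    # One epoch: a full pass over the training data.
    model.train()
    total = 0.0
    for X, y in dataloader:
        X, y = X.to(device), y.to(device)
        loss = loss_fn(model(X), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        total += loss.item()
    return total / len(dataloader)

for t in range(3):  # three epochs
    print(f"Epoch {t + 1}: avg loss {run_epoch(loader, model, loss_fn, optimizer):.4f}")
```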

Saving and Loading Models

A common method for saving models in PyTorch is to save the internal state dictionary.



torch.save(model.state_dict(), "model.pth")
print("Saved PyTorch Model State to model.pth")

Loading models is just as simple. You just have to re-create the model structure and load the state dictionary into it.



model = NeuralNetwork()

model.load_state_dict(torch.load("model.pth"))

This model can now be used to make predictions. 

classes = [
    "T-shirt/top",
    "Trouser",
    "Pullover",
    "Dress",
    "Coat",
    "Sandal",
    "Shirt",
    "Sneaker",
    "Bag",
    "Ankle boot",
]

model.eval()
x, y = test_data[0][0], test_data[0][1]
with torch.no_grad():
    pred = model(x)
    predicted, actual = classes[pred[0].argmax(0)], classes[y]
    print(f'Predicted: "{predicted}", Actual: "{actual}"')


If you'd like to see the code for this tutorial, you can check out the Jupyter Notebook or view the project on GitHub.

That's all there is to it! It's an exciting time to be a developer, programmer, or business owner looking to take advantage of the powerful tech at your disposal.

After completing this tutorial, you should have a sense of how easy it is to get up and running with just a few commands!

PyTorch: Ready For the Cloud?

The world is becoming more decentralized with each passing year. Cloud computing and decentralized technology empower you and your customers to flourish and thrive in this new paradigm.

Cloud computing platforms provide powerful infrastructure for training and deploying machine learning models. Learn how to serve PyTorch models from Oracle Cloud Infrastructure (OCI) for free with this step-by-step guide.