How to Deploy Your Deep Learning Model for Free on Heroku | FastAPI | PyTorch | FastAI

Dhruv Padhiyar
3 min read · Sep 25, 2020
Deep Learning Model

In this tutorial, I will show how to deploy your Deep Learning Model with FastAPI, PyTorch, and FastAI on the Heroku platform for FREE. (No Credit Card Required.)

The model will take an input image of a car and classify the damage into three categories: MINOR, MODERATE, and SEVERE. I have already deployed the code on Heroku; you can visit the website here.

Let's start with the Project structure first.

File structure

Let us start by creating a folder. Inside that folder we will have our app.py file, which will handle all GET and POST requests, our model code for detecting damage, a requirements file listing all the dependencies for the project, and some template files.
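A rough sketch of the resulting layout (the project folder name is a placeholder; the other names come from the code later in this post):

car-damage-detection/        (folder name is a placeholder)
├── app.py                   (handles GET and POST requests)
├── modelpredict.py          (loads the model and runs predictions)
├── requirements.txt         (project dependencies)
├── Procfile                 (tells Heroku how to start the server)
├── static/                  (uploaded images are saved here)
└── templates/
    ├── index.html           (upload page)
    └── predict.html         (results page)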

The requirements.txt file contains:

fastapi
jinja2
aiofiles
requests
uvicorn
fastai==1.0.61
torchvision==0.5.0
https://download.pytorch.org/whl/cpu/torch-1.4.0%2Bcpu-cp36-cp36m-linux_x86_64.whl
python-multipart

As you can see, we have pinned a specific CPU-only build of the PyTorch library because of Heroku's slug size limitation (we only get 500 MB).

Now we install all the requirements for our project:

pip3 install -r requirements.txt

Let's start with the app.py file. We will import all the required packages from the fastapi library.

from fastapi import FastAPI, Request, File, UploadFile
from fastapi.staticfiles import StaticFiles
from fastapi.templating import Jinja2Templates
from fastapi.responses import HTMLResponse, FileResponse
from modelpredict import ModelPredict

Now we create the FastAPI app, mount the static files directory, and initialize the templates directory (make sure the static and templates folders exist in your project).

app = FastAPI()
app.mount("/static", StaticFiles(directory="static"), name="static")
templates = Jinja2Templates(directory="templates")

Now we will create the starting point of our website. This page will have an option to upload an image.

@app.get("/")
async def root(request: Request):
return templates.TemplateResponse("index.html", {
'request': request,
})
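The index.html template itself is not shown in this post; a minimal version might look like the sketch below, assuming a simple form that posts the selected image to the /predict/ endpoint. The form field must be named file to match the endpoint's parameter.

<!-- templates/index.html: minimal upload form (illustrative sketch) -->
<html>
  <body>
    <h1>Car Damage Detection</h1>
    <!-- The field name "file" must match the UploadFile parameter in app.py -->
    <form action="/predict/" method="post" enctype="multipart/form-data">
      <input type="file" name="file" accept="image/*">
      <input type="submit" value="Upload">
    </form>
  </body>
</html>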

The next thing is to create an endpoint for POSTing data to our server. The server will process the incoming image and return the predicted value along with the filename.

@app.post("/predict/")
async def create_upload_files(request: Request,file: UploadFile = File(...)):
contents = await file.read()
filename = 'static/' + file.filename
with open(filename, 'wb') as f:
f.write(contents)
m = ModelPredict(filename).predict()
return templates.TemplateResponse("predict.html", {
"request": request,
"filename": file.filename,
"Predict": m
})
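The predict.html template (not shown here) just needs to render the filename and Predict values passed in the context. Since requests is already in our requirements, you can quickly test the endpoint from a Python shell once the server is running locally; this is only a sketch, and the image path and port are assumptions.

# Quick local test of the /predict/ endpoint (illustrative; adjust path and port as needed)
import requests

with open('car.jpg', 'rb') as img:  # 'car.jpg' is a placeholder image path
    response = requests.post(
        'http://127.0.0.1:5000/predict/',
        files={'file': ('car.jpg', img, 'image/jpeg')},
    )

print(response.status_code)
print(response.text)  # the rendered predict.html page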

The modelpredict.py file will contain the following code:

from fastai.vision import *
from os import path
from urllib.request import urlretrieve

class ModelPredict():
    def __init__(self, filename):
        self.filename = filename

    def download_model(self):
        # Download the exported learner from Google Drive if it is not already present
        if not path.exists('export.pkl'):
            url = 'https://drive.google.com/uc?id=1V30NUR4t0pZ0a76d8NMx5XQ7NSSbyR__&export=download'
            filename = 'export.pkl'
            urlretrieve(url, filename)

    def predict(self):
        self.download_model()
        learn = load_learner('')          # load export.pkl from the current directory
        img = open_image(self.filename)   # read the uploaded image
        pred_class, pred_idx, outputs = learn.predict(img)
        return str(pred_class)
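The export.pkl file is simply a serialized fastai learner. If you are training your own model instead of downloading mine, it would typically be produced with fastai v1's export method; a minimal sketch, assuming you already have a trained learner called learn:

# After training your classifier in fastai v1 (illustrative sketch)
learn.export('export.pkl')  # writes export.pkl to the learner's path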

Last but not least is the Procfile, which should contain the following line:

web: uvicorn app:app --host=0.0.0.0 --port=${PORT:-5000}
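Before pushing to Heroku, you can start the same server locally to make sure everything works (the port here is just an example):

uvicorn app:app --reload --port 5000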

After that, just push the code to the Heroku remote with git push heroku master and you are done! The basic commands are sketched below.
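If you are deploying with the Heroku CLI, the steps look roughly like this (the app name is a placeholder):

heroku login
git init                                   # skip if the folder is already a git repo
git add .
git commit -m "Deploy car damage model"
heroku create car-damage-demo              # "car-damage-demo" is a placeholder name
git push heroku master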

You can check out the full code on my GitHub.

Thank you all. Take care and wear a mask!
