How to Create a Webhook Listener?
For example, imagine a scenario where a customer completes a purchase on your ecommerce platform. A webhook could be triggered to send purchase details to your script, which will then use this information to fetch additional customer data from a customer data platform and update the customer’s rewards points in a third-party app like Yotpo.
A webhook listener is the code that listens for these incoming webhooks, allowing your applications to take immediate action based on that information. Let’s take a look at the different ways we can set up a webhook listener.
There are two main approaches to setting up a webhook listener: using a web server or opting for serverless computing.
A web server runs continuously, listening for incoming webhook events. The simplest way to set up a web server is to lease a virtual machine from a cloud hosting provider such as AWS or DigitalOcean. You would then choose an operating system, usually a reliable Linux distribution like Ubuntu or CentOS. Once your VM is up, you would install web server software such as Nginx or Apache to handle HTTP requests, configure SSL/TLS to secure your server, and set up your application to receive and process the webhooks.
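To make this concrete, here is a minimal sketch of what the application side of a web-server setup could look like. Flask, the /webhook path, and port 8000 are illustrative assumptions, not requirements:

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])  # hypothetical endpoint path
def webhook():
    # Webhook senders typically POST a JSON payload describing the event
    event = request.get_json(silent=True) or {}
    # Process the event here, e.g. update rewards points in a third-party app
    print("Received webhook event:", event)
    return jsonify({"status": "received"}), 200

if __name__ == "__main__":
    app.run(port=8000)

In production, this app would sit behind the Nginx or Apache instance described above, which terminates SSL/TLS and proxies requests to it.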
The primary advantage of using a web server is the complete control it offers over both your application and the server environment. You can customize configurations to suit your specific needs, which is not always possible with serverless, as it often imposes limitations on software, hardware, and execution duration.
On the other hand, serverless computing allows you to execute code without the complexities of server management. The setup process typically involves just writing your webhook handling code and deploying it to a serverless platform.
Deploying serverless code is notably simpler and quicker. In many instances, you simply write the code, deploy it, and all other aspects—such as server management, SSL certificate setup and renewal, and system maintenance—are handled automatically.
Serverless computing can also be more cost-effective, particularly for applications with infrequent requests. You only pay for the compute resources you actually use. Conversely, a traditional web server continuously runs and consumes resources regardless of demand.
Finally, serverless platforms manage scaling automatically, seamlessly adding servers and balancing loads as needed – something that would take a massive amount of work in a traditional server setup.
Ultimately, the decision between using a web server and going for serverless computing depends on your application’s specific requirements and operational needs. For straightforward tasks such as webhook listening, serverless is often the ideal choice due to its simplicity. We will explore how to set up a serverless webhook listener in the next section.
codeupify.com is a serverless platform that makes it pretty simple to get a webhook listener up and running. Here are the steps to set up a sample Python webhook listener:
import requests

def handler(request):
    # Parse the incoming request
    payload = request.json  # assumes the platform exposes the parsed JSON body
    # Send a request to a third party service
    # e.g. requests.post(..., json=payload)
    # ...
    return {'status': 'received'}
The URL you receive is the endpoint for your webhook listener. Simply set this URL as the target in whichever service is sending webhooks.
That’s it! Once your webhook is set up, make sure to thoroughly test it across various scenarios to ensure it handles all expected events accurately. Try to simulate different conditions and payloads to verify that your listener responds correctly. Additionally, ongoing monitoring of your webhook is essential to quickly identify and resolve any errors or performance issues that may arise over time.
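One way to exercise those scenarios is to simulate deliveries yourself by POSTing sample payloads at your endpoint. A minimal sketch using the requests library; the URL and event payloads are placeholders, not a real service’s schema:

import requests

WEBHOOK_URL = "https://example.com/webhook"  # placeholder: your listener's URL

test_payloads = [
    {"event": "purchase.completed", "order_id": 123, "amount": 49.99},
    {"event": "purchase.refunded", "order_id": 123},
    {},  # empty payload, to check that malformed input is handled gracefully
]

for payload in test_payloads:
    response = requests.post(WEBHOOK_URL, json=payload, timeout=10)
    print(payload.get("event", "<empty>"), "->", response.status_code)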
Django REST Framework – A Complete Guide
Django REST framework is the de facto library for building REST APIs in Django. It’s been around since 2011, and in that time it has been used in thousands of projects, with over a thousand contributors. It’s currently used by many large companies, including Robinhood, Mozilla, Red Hat, and Eventbrite.
If you are using Django as your web framework and you need to write a REST API, Django REST framework is the default choice to get the job done. It’s by far the most popular Django library for writing REST APIs, and it’s well maintained and supported. It also comes with many features out of the box that simplify API development.
If you are using Django and need REST APIs, it’s a no-brainer: you should use Django REST framework. But over the past few years another API type has started gaining a lot of traction – GraphQL. If you are going to be writing a GraphQL API, it doesn’t make sense to use Django REST framework; take a look at Graphene Django instead.
Django REST framework has pretty much come to dominate Django REST API development, but here are some other alternatives:
Django Tastypie: another complete Django REST API library. People who have used it seem to say lots of positive things about it. Unfortunately, the project stopped being maintained and is no longer under active development.
Django Restless: from the creator of Django Tastypie, this is a small, flexible REST API library. Where Django REST framework has evolved into a big library that can accommodate pretty much everyone, Django Restless just tries to do a few things really well, without adding any bloat. If you like to tinker and want something really fast and flexible, this might be the library for you.
To start using Django REST framework you need to install the djangorestframework package:
pip install djangorestframework
Add rest_framework to your INSTALLED_APPS setting:
INSTALLED_APPS = [
    ...
    'rest_framework',
]
That should be enough to get you started.
Django REST framework makes it very easy to create a basic API that works with Django models. With a few lines of code we can create an API that can list our objects and do basic CRUD. Let’s take a look at an example with some basic models.
models.py
from django.db import models

class Author(models.Model):
    name = models.CharField(max_length=255)

class Book(models.Model):
    author = models.ForeignKey(Author, on_delete=models.CASCADE)
    title = models.CharField(max_length=255)
    num_pages = models.IntegerField(default=0)  # referenced by the serializers below
Serializers allow the conversion of querysets and model instances into data types that can then be rendered into a content type (JSON, XML, etc.), and the other way around.
serializers.py
from rest_framework import serializers

from book.models import Book

class BookSerializer(serializers.ModelSerializer):
    class Meta:
        model = Book
        fields = ['author', 'title', 'num_pages']
views.py
from rest_framework import viewsets

from book.models import Book
from book.serializers import BookSerializer

class BookViewset(viewsets.ModelViewSet):
    queryset = Book.objects.all()
    serializer_class = BookSerializer
And we let the REST framework wire up the URL routes based on common conventions.
urls.py
from django.urls import include, path
from rest_framework import routers

from book import views

router = routers.DefaultRouter()
router.register(r'', views.BookViewset)

urlpatterns = [
    path('', include(router.urls)),
]
Going to http://127.0.0.1:8000/book gives us the browsable list view. Here we can see a list of books with a GET request and can create a new book with a POST request. The Browsable API gives us a nice, human-browsable display and forms that we can play around with.
If we go to http://127.0.0.1:8000/book/1/, we see that a GET request to this URL will give us details about the book with ID 1. A PUT request will modify that book’s data. And a DELETE request will delete the book with ID 1.
Since we are requesting Content-Type text/html, we receive the Browsable API, a human-friendly template. If we were to ask for Content-Type application/json, we would just get the JSON. You can also set the format explicitly in your browser like so:
http://127.0.0.1:8000/book/1/?format=json
Response:
{"author":2,"title":"To Kill a Mockingbird","num_pages":281}
As you can see, Django REST framework makes it very easy for us to create a basic model CRUD API.
As we saw in the basic example, Django REST framework makes model CRUD really simple. How do we go about writing some custom API calls? Let’s say we wanted to search the books from the basic example by author and title.
Here’s a basic Django view method for searching books:
views.py
from rest_framework.decorators import api_view
from rest_framework.response import Response

from book.models import Book
from book.serializers import BookSerializer

@api_view(['GET'])
def book_search(request):
    author = request.query_params.get('author', None)
    title = request.query_params.get('title', None)
    queryset = Book.objects.all()
    if author:
        queryset = queryset.filter(author__name__contains=author)
    if title:
        queryset = queryset.filter(title__contains=title)
    serializer = BookSerializer(queryset, many=True)
    return Response(serializer.data)
urls.py
urlpatterns = [
    path('book-search', views.book_search, name='book_search'),
]
The code overall looks pretty similar to a standard Django view, with just a few modifications. It’s wrapped in the api_view decorator. This decorator passes a REST framework Request object and modifies the context of the returned REST framework Response object. We are using request.query_params instead of request.GET, and would use request.data instead of request.POST. And finally it uses a serializer to return a response, which will return the right content type to the client.
If we wanted to use class based views to facilitate code reuse we could modify the above code like so:
views.py
from rest_framework.views import APIView

class BookSearch(APIView):
    def get(self, request, format=None):
        author = self.request.query_params.get('author', None)
        title = self.request.query_params.get('title', None)
        queryset = Book.objects.all()
        if author:
            queryset = queryset.filter(author__name__contains=author)
        if title:
            queryset = queryset.filter(title__contains=title)
        serializer = BookSerializer(queryset, many=True)
        return Response(serializer.data)
urls.py
urlpatterns = [
    path('book-search-view', views.BookSearch.as_view()),
]
Of course the REST framework has a bunch of reusable view classes and mixins you can use. For example, for the view above you might want to use ListAPIView. If you wanted to customize the Book CRUD code, instead of using the ViewSet from the basic example, you might want to combine a variation of ListModelMixin, CreateModelMixin, RetrieveModelMixin, UpdateModelMixin, and DestroyModelMixin.
GenericAPIView is a common view that adds some common functionality and behavior to the base REST framework APIView class. With this class you can override some attributes to get the desired behavior:

- queryset, or override get_queryset(), to specify the objects that should come back from the view
- serializer_class, or override get_serializer_class(), to get the serializer class to use for the object
- pagination_class, to specify how pagination will be used
- filter_backends – backends to use to filter the request; we go over backend filtering below

Here we use ListAPIView (which extends GenericAPIView and ListModelMixin) to create our book search:
views.py
from rest_framework.generics import ListAPIView

class BookSearch(ListAPIView):
    serializer_class = BookSerializer

    def get_queryset(self):
        author = self.request.query_params.get('author', None)
        title = self.request.query_params.get('title', None)
        queryset = Book.objects.all()
        if author:
            queryset = queryset.filter(author__name__contains=author)
        if title:
            queryset = queryset.filter(title__contains=title)
        return queryset
Let’s say we had to write an API that lets someone update a book’s read status:
models.py
from django.contrib.auth import get_user_model
from django.db import models

class UserBook(models.Model):
    STATUS_UNREAD = 'u'
    STATUS_READ = 'r'
    STATUS_CHOICES = [
        (STATUS_UNREAD, 'unread'),
        (STATUS_READ, 'read'),
    ]
    book = models.ForeignKey(Book, on_delete=models.CASCADE)
    user = models.ForeignKey(get_user_model(), on_delete=models.CASCADE)
    # Named `status` to match the serializer and views below
    status = models.CharField(max_length=1, choices=STATUS_CHOICES, default=STATUS_UNREAD)
serializers.py
class UserBookSerializer(serializers.ModelSerializer):
    class Meta:
        model = UserBook
        fields = ['book', 'user', 'status']
We want to limit the field that they can change to just the status. Ideally, we would also validate that the user has permission to change this specific book, but we’ll get to that in the authentication/authorization section.
views.py
from rest_framework import permissions
from rest_framework.generics import UpdateAPIView
from rest_framework.response import Response

class BookStatusUpdate(UpdateAPIView):
    queryset = UserBook.objects.all()
    serializer_class = UserBookSerializer
    permission_classes = (permissions.IsAuthenticated,)

    def update(self, request, *args, **kwargs):
        instance = self.get_object()
        data = {'status': request.data.get('status')}
        serializer = self.get_serializer(instance, data=data, partial=True)
        serializer.is_valid(raise_exception=True)
        self.perform_update(serializer)
        return Response(serializer.data)
So far we have used very simple, automatic serialization by just listing the fields. REST framework serializers are similar to Django Forms and give us a lot of control by specifying attributes and overriding various methods. For our BookSerializer, we could instead have listed out the fields with their type, requirements, max_length, and so on.
serializers.py
class BookSerializer(serializers.ModelSerializer):
    title = serializers.CharField(required=True, max_length=100)
    num_pages = serializers.IntegerField(read_only=True)

    class Meta:
        model = Book
        fields = ['author', 'title', 'num_pages']
We could also override create() and update() methods to be able to execute some custom functionality:
serializers.py
class BookSerializer(serializers.ModelSerializer):
    title = serializers.CharField(required=True, allow_blank=True, max_length=100)
    num_pages = serializers.IntegerField(read_only=True)

    def create(self, validated_data):
        # Custom code
        return Book.objects.create(**validated_data)

    def update(self, instance, validated_data):
        # Custom code
        instance.title = validated_data.get('title', instance.title)
        instance.num_pages = validated_data.get('num_pages', instance.num_pages)
        instance.save()
        return instance

    class Meta:
        model = Book
        fields = ['author', 'title', 'num_pages']
Working with serializers is very similar to how we work with Django forms: we validate the serializer and then call save(), saving the instance. Here’s a serializer that validates the title of the book.
serializers.py
class BookSerializer(serializers.ModelSerializer):
    title = serializers.CharField(max_length=100)

    def validate_title(self, value):
        if len(value) < 4:
            raise serializers.ValidationError("Title is too short")
        return value

    class Meta:
        model = Book
        fields = ['author', 'title', 'num_pages']
views.py
serializer = BookSerializer(data=data)
if serializer.is_valid():
    serializer.save()
You can reference other entities in various ways:

- PrimaryKeyRelatedField
- HyperlinkedRelatedField
- StringRelatedField
- SlugRelatedField

For example, here’s how an Author on our Book might look if we were to just use PrimaryKeyRelatedField:
{
    "author": 2,
    "title": "To Kill a Mockingbird",
    "num_pages": 281
}
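If the primary key isn’t informative enough, you could swap in one of the other relation fields. As a sketch (not part of the original example, and assuming Author defines a __str__ method returning its name), StringRelatedField would render the author via __str__ instead; note that it is read-only:

class BookSerializer(serializers.ModelSerializer):
    # Renders the related Author via its __str__ method,
    # e.g. "author": "Harper Lee" instead of "author": 2
    author = serializers.StringRelatedField()

    class Meta:
        model = Book
        fields = ['author', 'title', 'num_pages']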
Serializers can be nested; this way we can work with multiple objects in one operation, like getting all the information about the Book as well as the Author in a GET request:
serializers.py
class AuthorSerializer(serializers.ModelSerializer):
    name = serializers.CharField(max_length=255)

    class Meta:
        model = Author
        fields = ['name']

class BookSerializer(serializers.ModelSerializer):
    title = serializers.CharField(max_length=255)
    author = AuthorSerializer()

    class Meta:
        model = Book
        fields = ['author', 'title', 'num_pages']
http://127.0.0.1:8000/book/1/ now returns:
{
    "author": {
        "name": "Harper Lee"
    },
    "title": "To Kill a Mockingbird",
    "num_pages": 281
}
To be able to create and update a nested relationship in one request, you will need to modify create() and update(); they will not work with nested fields out of the box. The reason for this is that the relationships between models are complicated and based on specific application requirements. It’s not something that can be set up automatically; your logic will have to deal with deletions, None objects, and so on.
Here’s how you might handle create() in our simple example:
serializers.py
class BookSerializer(serializers.ModelSerializer):
    title = serializers.CharField(max_length=255)
    author = AuthorSerializer()

    def create(self, validated_data):
        author_data = validated_data.pop('author')
        author = Author.objects.create(**author_data)
        book = Book.objects.create(author=author, **validated_data)
        return book

    class Meta:
        model = Book
        fields = ['author', 'title', 'num_pages']
Doing a POST to http://127.0.0.1:8000/book with
{
    "author": {
        "name": "John1"
    },
    "title": "Book by John1",
    "num_pages": 10
}
This will now create both an author and a book.
Here’s how we might handle a simple update():
serializers.py
def update(self, instance, validated_data):
    author_data = validated_data.pop('author')
    author = instance.author
    instance.title = validated_data.get('title', instance.title)
    instance.num_pages = validated_data.get('num_pages', instance.num_pages)
    instance.save()
    author.name = author_data.get('name', author.name)
    author.save()
    return instance
A PATCH or PUT call to http://127.0.0.1:8000/book/8/ (that’s the id of this particular book), with
{
    "author": {
        "name": "John1_mod"
    },
    "title": "Book by John1_mod",
    "num_pages": 20
}
This will modify our book with the new author, title, and num_pages.
Validation in the REST framework is done on the serializer. Just like with Django forms you can set some basic validation on the fields themselves. In our example above, we had:
serializers.py
class AuthorSerializer(serializers.ModelSerializer):
    class Meta:
        model = Author
        fields = ['name', 'email']
We can add requirements to individual fields to enforce various rules:
serializers.py
from rest_framework.validators import UniqueValidator

class AuthorSerializer(serializers.ModelSerializer):
    name = serializers.CharField(max_length=255, required=True)
    email = serializers.EmailField(read_only=True,
                                   validators=[UniqueValidator(queryset=Author.objects.all())])

    class Meta:
        model = Author
        fields = ['name', 'email']
Now name is a required field, and email is read-only and unique.
Just like with forms, before saving a serializer you should call is_valid() on it. If there are validation errors, they will show up in serializer.errors as a dictionary.
serializer.errors
# {'email': ['Enter a valid e-mail address.'], 'created': ['This field is required.']}
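Alternatively, instead of branching on is_valid() yourself, a common pattern is to pass raise_exception=True, which makes the framework convert validation errors into an HTTP 400 response automatically. A brief sketch:

serializer = BookSerializer(data=data)
# Raises ValidationError on bad input; REST framework's default exception
# handler turns it into a 400 response containing serializer.errors
serializer.is_valid(raise_exception=True)
serializer.save()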
When writing your serializer, you can do field level and object level validation. Field-level validation can be done like this:
class AuthorSerializer(serializers.ModelSerializer):
    def validate_email(self, value):
        if value.find('@mail.com') >= 0:
            raise serializers.ValidationError("The author can't have a mail.com address")
        return value

    class Meta:
        model = Author
        fields = ['name', 'email']
Object level validation can be done like this:
class AuthorSerializer(serializers.ModelSerializer):
    def validate(self, data):
        if data['email'].find(data['name']) >= 0:
            raise serializers.ValidationError("The author's email can't contain his name")
        return data

    class Meta:
        model = Author
        fields = ['name', 'email']
The default authentication scheme can be set globally with the DEFAULT_AUTHENTICATION_CLASSES setting:
settings.py
REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': [
        'rest_framework.authentication.BasicAuthentication',
        'rest_framework.authentication.SessionAuthentication',
    ]
}
Or on a per-view basis with authentication_classes:
views.py
from rest_framework.authentication import BasicAuthentication, SessionAuthentication
from rest_framework.decorators import api_view, authentication_classes, permission_classes
from rest_framework.permissions import IsAuthenticated

class BookSearch(ListAPIView):
    queryset = Book.objects.all()
    serializer_class = BookSerializer
    authentication_classes = [SessionAuthentication, BasicAuthentication]
    permission_classes = [IsAuthenticated]

@api_view(['GET'])
@authentication_classes([SessionAuthentication, BasicAuthentication])
@permission_classes([IsAuthenticated])
def book_search(request):
    pass
There are four types of authentication schemes:

- BasicAuthentication: the client sends the username and password with the request; not really suitable for production environments
- TokenAuthentication: the client authenticates once and receives a token, and that token is then used to authenticate the client; this is good for separate clients and servers
- SessionAuthentication: the standard Django authentication scheme, where there is a server-side session and the client passes the session ID to the server
- RemoteUserAuthentication: this scheme has the web server deal with authentication

For APIs, especially where the client is a separate application from the server, token authentication is the most interesting. To do token authentication with Django REST framework, you have to add rest_framework.authtoken to your INSTALLED_APPS.
settings.py
INSTALLED_APPS = [
    ...
    'rest_framework.authtoken',
]
Run migrations after adding this app.
In your application you will have to create a token for the user after they authenticate with a username and password. You can do it with this call:
views.py
from rest_framework.authtoken.models import Token

token = Token.objects.create(user=...)
And then pass that token back to the client. The client will then include that Token in the HTTP headers like so:
Authorization: Token 9944b09199c62bcf9418ad846dd0e4bbdfc6ee4b
REST framework already has a built-in view for obtaining an auth token, obtain_auth_token. If the defaults work for you, you can wire this view into your urls and don’t have to write any of your own logic.
urls.py
from rest_framework.authtoken import views

urlpatterns += [
    path('api-token-auth/', views.obtain_auth_token),
]
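From the client’s side, the token flow might look like this. A sketch using the requests library; the base URL and credentials are placeholders:

import requests

BASE_URL = 'http://127.0.0.1:8000'

# Exchange a username/password for a token
resp = requests.post(BASE_URL + '/api-token-auth/',
                     data={'username': 'alice', 'password': 'secret'})
token = resp.json()['token']

# Include the token in the Authorization header on subsequent calls
books = requests.get(BASE_URL + '/book/',
                     headers={'Authorization': 'Token ' + token})
print(books.json())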
For authorization you can also set global and view-level policies. For a global policy, you would set it in settings.py:
REST_FRAMEWORK = {
    'DEFAULT_PERMISSION_CLASSES': [
        'rest_framework.permissions.IsAuthenticated',  # Allow only authenticated requests
        # 'rest_framework.permissions.AllowAny',  # Allow anyone
    ]
}
And for views, you would use permission_classes:
views.py
class BookSearch(ListAPIView):
    queryset = Book.objects.all()
    serializer_class = BookSerializer
    authentication_classes = [SessionAuthentication, BasicAuthentication]
    permission_classes = [IsAuthenticated]

@api_view(['GET'])
@authentication_classes([SessionAuthentication, BasicAuthentication])
@permission_classes([IsAuthenticated])
def book_search(request):
    pass
You can have a view that’s authenticated or read only like this:
permission_classes = [IsAuthenticated|ReadOnly]
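Note that ReadOnly is not one of REST framework’s built-in permission classes; it’s a small custom class you would define yourself. A minimal sketch:

from rest_framework import permissions

class ReadOnly(permissions.BasePermission):
    """Allow only safe, read-only methods: GET, HEAD, and OPTIONS."""

    def has_permission(self, request, view):
        return request.method in permissions.SAFE_METHODS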
For a full list of permissions, take a look at the API Reference.
You can also create custom permissions by extending permissions.BasePermission:
from rest_framework import permissions

class CustomPermission(permissions.BasePermission):
    def has_permission(self, request, view):
        ip_addr = request.META['REMOTE_ADDR']
        blocked = Blocklist.objects.filter(ip_addr=ip_addr).exists()
        return not blocked
And then include it in your permission_classes:
views.py
class BookSearch(ListAPIView):
    queryset = Book.objects.all()
    serializer_class = BookSerializer
    authentication_classes = [SessionAuthentication, BasicAuthentication]
    permission_classes = [CustomPermission]
And finally, Django REST framework supports object-level permissions via check_object_permissions, which determines whether the user has permissions on the model instance itself.
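As a sketch of an object-level permission: has_object_permission() is the hook that check_object_permissions() invokes for views operating on a single instance. The user field here assumes an owner relationship like the one on our UserBook model:

from rest_framework import permissions

class IsOwner(permissions.BasePermission):
    """Object-level permission: only the owner of the object may access it."""

    def has_object_permission(self, request, view, obj):
        # Assumes the model has a `user` field pointing at its owner,
        # as UserBook does in the examples above
        return obj.user == request.user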
Most of the time you want to filter the queryset that comes back. If you are using GenericAPIView, the simplest way to do that is to override get_queryset(). One common requirement is to filter the queryset by the current user; here is how you would do that:
views.py
class UserBookList(ListAPIView):
    serializer_class = UserBookSerializer

    def get_queryset(self):
        user = self.request.user
        return UserBook.objects.filter(user=user)
Our BookSearch above actually used query parameters (query_params) to do the filtering by overriding get_queryset().
Django REST framework also lets you configure a generic filtering system that will use fields on the models to determine what to filter.
To get that set up, you first need to install django-filter:
pip install django-filter
Then add django_filters to INSTALLED_APPS:
INSTALLED_APPS = [
    ...
    'django_filters',
    ...
]
Then you can either add backend filters globally in your settings.py file:
REST_FRAMEWORK = {
    'DEFAULT_FILTER_BACKENDS': ['django_filters.rest_framework.DjangoFilterBackend']
}
Or add them to individual class-based views:
views.py
from django_filters.rest_framework import DjangoFilterBackend

class BookSearch(generics.ListAPIView):
    ...
    filter_backends = [DjangoFilterBackend]
Let’s modify our BookSearch example above with Django Backend Filtering. What we had above:
views.py
class BookSearch(APIView):
    def get(self, request, format=None):
        author = self.request.query_params.get('author', None)
        title = self.request.query_params.get('title', None)
        queryset = Book.objects.all()
        if author:
            queryset = queryset.filter(author__name__contains=author)
        if title:
            queryset = queryset.filter(title__contains=title)
        serializer = BookSerializer(queryset, many=True)
        return Response(serializer.data)
Let’s modify it to use Backend Filtering:
class BookSearch(ListAPIView):
    queryset = Book.objects.all()
    serializer_class = BookSerializer
    filter_backends = [DjangoFilterBackend]
    filterset_fields = ['author__name', 'title']
This gets us exact matches though, which isn’t exactly the same functionality. We can change to the SearchFilter backend (which uses the search_fields attribute) to get the same functionality as above:
from rest_framework.filters import SearchFilter

class BookSearch(ListAPIView):
    queryset = Book.objects.all()
    serializer_class = BookSerializer
    filter_backends = [SearchFilter]
    search_fields = ['author__name', 'title']
Now we just call it with:
http://127.0.0.1:8000/book/book-search-view?search=harper
And get back all the books that have “harper” in the title or author’s name.
We can also order against specific fields like so:
views.py
from rest_framework.filters import OrderingFilter

class BookSearch(ListAPIView):
    queryset = Book.objects.all()
    serializer_class = BookSerializer
    filter_backends = [OrderingFilter]
    ordering_fields = ['title', 'author__name']
Letting someone order with a query like this:
http://127.0.0.1:8000/book/book-search-view?ordering=-title
Note that if you don’t specify ordering_fields, or if you set it to '__all__', you may expose fields that you don’t want someone to order by, like password fields.
Pagination can be set globally and at the view level. To set it globally, add it to the settings file:
settings.py
REST_FRAMEWORK = {
    'DEFAULT_PAGINATION_CLASS': 'rest_framework.pagination.LimitOffsetPagination',
    'PAGE_SIZE': 25
}
To set it on a view you can use the pagination_class attribute. You can create a custom pagination class by extending PageNumberPagination:
from rest_framework.pagination import PageNumberPagination

class StandardResultsSetPagination(PageNumberPagination):
    page_size = 100
    page_size_query_param = 'page_size'
    max_page_size = 1000
Caching is done with Django’s method_decorator, cache_page, and vary_on_cookie:
from django.utils.decorators import method_decorator
from django.views.decorators.cache import cache_page

# Cache the requested url for 2 hours
@method_decorator(cache_page(60*60*2), name='dispatch')
class BookSearch(ListAPIView):
    queryset = Book.objects.all()
    serializer_class = BookSerializer
    authentication_classes = [SessionAuthentication, BasicAuthentication]
    permission_classes = [CustomPermission]
vary_on_cookie can be used to cache the request per user.
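For example, to cache a per-user endpoint you could combine cache_page with vary_on_cookie, so each session gets its own cache entry. A sketch based on the UserBookList view from earlier (the 30-minute timeout is an arbitrary choice):

from django.utils.decorators import method_decorator
from django.views.decorators.cache import cache_page
from django.views.decorators.vary import vary_on_cookie

# The session cookie varies the cache key, so users don't see
# each other's cached book lists
@method_decorator(cache_page(60 * 30), name='dispatch')
@method_decorator(vary_on_cookie, name='dispatch')
class UserBookList(ListAPIView):
    serializer_class = UserBookSerializer

    def get_queryset(self):
        return UserBook.objects.filter(user=self.request.user)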
You can throttle your API (control the rate of requests). To do it globally, add these settings:
REST_FRAMEWORK = {
    'DEFAULT_THROTTLE_CLASSES': [
        'rest_framework.throttling.AnonRateThrottle',
        'rest_framework.throttling.UserRateThrottle'
    ],
    'DEFAULT_THROTTLE_RATES': {
        'anon': '100/day',
        'user': '1000/day'
    }
}
Or set it at the view level with throttle_classes, for example:
views.py
from rest_framework.throttling import UserRateThrottle

class BookSearch(ListAPIView):
    queryset = Book.objects.all()
    serializer_class = BookSerializer
    authentication_classes = [SessionAuthentication, BasicAuthentication]
    throttle_classes = [UserRateThrottle]
For throttling, clients are identified by default using the X-Forwarded-For HTTP header and, if that is not present, by the REMOTE_ADDR value from the request.
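If the built-in anon/user rates are not granular enough, you can subclass the built-in throttles and give them their own scope. A minimal sketch:

from rest_framework.throttling import UserRateThrottle

class BurstRateThrottle(UserRateThrottle):
    # Looks up its rate under DEFAULT_THROTTLE_RATES['burst'],
    # e.g. 'burst': '60/min' in settings.py
    scope = 'burst'

You would then reference it in throttle_classes or in DEFAULT_THROTTLE_CLASSES like any other throttle.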
By default versioning is not enabled. You can set up versioning by adding this to your settings file:
REST_FRAMEWORK = {
    'DEFAULT_VERSIONING_CLASS': 'rest_framework.versioning.NamespaceVersioning'
}
If DEFAULT_VERSIONING_CLASS is None, which is the default, then request.version will be None.
It’s possible to set versioning on a specific view with versioning_class, but usually versioning is set globally.
You can control versioning with the following settings:
- DEFAULT_VERSION: sets the version used when no version is provided; defaults to None. Can be set per view with the default_version attribute.
- ALLOWED_VERSIONS: specifies the set of versions that are allowed; a request for a version outside the set will raise an error. Can be set per view with the allowed_versions attribute.
- VERSION_PARAM: the parameter to use for versioning; defaults to version. Can be set per view with the version_param attribute.

You have a few options for versioning classes:
- AcceptHeaderVersioning: the version is passed in the Accept header
- URLPathVersioning: the version is passed as part of the URL structure
- NamespaceVersioning: similar to URLPathVersioning, but uses URL namespacing in Django; take a look at the documentation for how it differs from URLPathVersioning
- HostNameVersioning: uses the hostname to determine the version
- QueryParameterVersioning: uses a query parameter to determine the version

You can also create your own custom versioning scheme.
How you deal with different versions in your code is up to you. One possible example is to just use different serializers:
def get_serializer_class(self):
    if self.request.version == 'v1':
        return BookSerializerV1
    return BookSerializerV2
To generate documentation for your API you will have to generate an OpenAPI schema. Install the pyyaml and uritemplate packages:
pip install pyyaml uritemplate
You can dynamically generate a schema with get_schema_view(), like so:
from rest_framework.schemas import get_schema_view

urlpatterns = [
    # ...
    # Use the `get_schema_view()` helper to add a `SchemaView` to project URLs.
    # * `title` and `description` parameters are passed to `SchemaGenerator`.
    # * Provide view name for use with `reverse()`.
    path('openapi', get_schema_view(
        title="Your Project",
        description="API for all things …",
        version="1.0.0"
    ), name='openapi-schema'),
    # ...
]
Going to http://127.0.0.1:8000/openapi should show you the full OpenAPI schema of your API.
You can customize how your schema is generated; to learn how, check out the official documentation.
You can set descriptions on your views that will then be shown in both the browsable API and in the generated schema. The description uses markdown. For example:
@api_view(['GET'])
def book_search(request):
    """
    The book search
    """
    …
For viewset and view based classes you have to describe the methods and actions:
class BookViewset(viewsets.ModelViewSet):
    """
    retrieve:
    Return the given book

    create:
    Create a new book.
    """
    queryset = Book.objects.all()
    serializer_class = BookSerializer
As we saw above, Django REST framework is an incredibly comprehensive, all-encompassing REST API framework. I tried to distill some of the main concepts into this guide so that you can start working with the framework. The reality is that there is still a lot I wasn’t able to cover as far as the types of customizations you can make, the options you can set, and the various classes for every type of scenario. If you ever get stuck, you can always reference the API guide at https://www.django-rest-framework.org/.
Software Development Basics For Non-tech Founders
An MVP (Minimum Viable Product) is essentially the least-effort piece of software you can build that lets you start engaging your users and learning something valuable. It’s a way of testing assumptions and getting validated learning without spending a ton of effort doing the wrong things.
Eric Ries originally came up with the term in his Lean Startup methodology. While building products at his startups, Eric noticed that he was spending a massive amount of time building the wrong thing, only to then go back to the drawing board and throw most of it out. My experience has been very similar, both with the products I’ve built and at the startups I’ve worked at. In fact, I would say that overbuilding in a vacuum is the most common mistake I have witnessed at startups. Research seems to agree: CB Insights did a postmortem on 100 startups, and the number one reason startups fail is that they fail to fill a market need. These startups spent months or years building products that no one actually needed.
To understand why that problem exists, we have to understand what a startup fundamentally is and why it exists. The entire point of a startup is to discover a sustainable and scalable business model. It’s mainly a learning exercise. That’s different from, say, a small business that sells burgers, where the business model has already been discovered. So what happens is that the founders have some kind of an idea and a vision for their product. They go off and build it for many months without any input from users (or with incorrect input from users). They get the software just perfect: fully featured, polished, maintainable, and scalable. Then they take it to market and find out that users don’t really want it; they either don’t adopt it or don’t want to pay for it. The disconnect is that the product is based on what the founders think the market needs instead of what the market actually says it needs, and those are two very different things. It’s very rare for founders to guess what the market needs right out of the gate. In fact, most companies go through multiple pivots. Uber originally was a limo-sharing service, Twitter was a podcast subscription platform, PayPal was a PDA “beam” payment platform, etc.
Ultimately, what you have to remember is that the most beautiful, polished, full-featured app that doesn’t solve a market need still ends up in the trash can. So with an MVP, you figure out the minimum product you can build to start testing and validating your assumptions with actual market feedback.
Now that you know what you need to build to get some learned feedback, let’s discuss the two main models for getting software developed: Fixed-bid vs Iterative, or Waterfall vs Agile.
With this model you know exactly what you want; you go to an agency, describe in detail what you are looking for, and they then build it over X number of months. It’s the same as Waterfall with an in-house team: you would describe what you want for version 1.0, and they would go off and build it. The main thing to understand is that with this model you are pre-planning your product a few months in advance.
With iterative or Agile development, you frequently change what your developers are working on based on shifting priorities. In its most basic form, you work closely with a developer and just meet or email them with the changes you want. In a more formal setting with a larger team, it’s Agile with two-week sprints or a Kanban-style workflow.
Just like we discussed above, this all comes down to the type of business that you are running. If you know exactly what you want to end up with, you can go with a fixed-bid project, where all the costs and times are mostly knowable upfront. If you are selling burgers and need a McDonalds type of restaurant, you get quotes and have builders build it for you.
But in my experience, fixed-bid is the wrong way to go for most early stage startups. As we discussed above, there is just too much shifting that goes on too frequently. You could potentially outsource your MVP as a fixed-bid project, but since you are thinking up all the possible features you will need up front, those MVPs end up being too bloated. And once the MVP is done, you will need to switch to iterative mode anyway. An MVP doesn’t mean that your product is done any more than a child entering first grade is done with school. An MVP is just a first step to get some validated learning; then it’s a continuous process of learning more until you finally arrive at a business model, and that can take years.
At one of my startups, the founder initially came to me to help review a proposal. It was a fixed-bid proposal with an agency: 7-8 months of development at $400k. We ended up turning down the proposal, and it’s a good thing that we did. What we had at 7 months was nothing like what the founder wanted in the proposal. If he had gone with the agency, he would have been on the phone with them within a month asking to change half of the proposal.
This isn’t an absolute truth. I had someone come to me with a desktop application that they wanted to convert to a web based app. They still had documentation from the original project describing all the business logic and you could reference their existing software. If you can plan out your entire business in great detail and know for sure that it will not change during those months, fixed bid is the right option. But most startups are very dynamic and chaotic, and they need a dynamic development model to complement the business side.
So now that you know what to build and how to build it, the next step is to figure who should build it.
There are many software development companies to choose from. You can google for the specific type of agency you are looking for, and use sites like https://clutch.co/ to read some reviews about them. Agencies typically like to do fixed-bid projects, but they will generally go along with whatever you want – team building, outstaffing, etc. Agencies are typically more expensive and rigid than individual developers. For example, with a freelancer you can arrange 30-hour weeks, temporarily pause a project, or have them work extra hours when needed. With an agency, the developers are just full-time employees of the agency; the agency is interested in having them work a steady 40 hours a week, with a normal schedule and a familiar workflow process.
Freelancers are lower cost, more flexible, and deliver quicker overall. But you are also getting fewer guarantees. A freelancer might be great or might just waste a ton of your time and money. They are also less reliable: a freelancer might find a full-time job or juggle too many projects and end up disappearing on you at any time.
You can find freelancers on https://www.upwork.com/ or sites like https://www.toptal.com/. Toptal is more expensive, but vets their freelancers for you.
In-house employees are reliable and have an interest in delivering high-quality products, since they will be on the project long term. Costs overall will be similar to freelancers: it’s less per hour, but you have to pay benefits. You can also try to reduce the cost by giving away equity. The major downside with employees is the lack of flexibility. You can ask a freelancer to reduce hours and wait; you can’t do that with an employee. It’s also much easier to let go of freelancers. An employee is a long-term commitment.
If you are just starting you might consider bringing on a technical co-founder. It’s a really great choice for non-tech people and can really simplify software development for your startup. If the technical co-founder is good, they will be able to recruit other developers later on as well. You have to make sure to really vet them, a bad partnership is going to be bad with a technical co-founder, just like with anyone else. And of course, you will have to give up some of your company.
That again depends on your situation. Bringing on a great tech co-founder for a tech company is extremely valuable and will save you a lot of headache in the long term. But it’s risky, a bad co-founder can jeopardize your entire company, the 3rd reason on the list for startup failures is the wrong team. If your tech co-founder can’t deliver on the MVP, ends up being a bad team player, or (in one case I witnessed) is more committed to the code than the product, it’s going to end poorly. And again, it really depends on your situation, if you just have an idea and no funding, a tech co-founder might be a must, but if you are well capitalized you might not want to go that route.
In general, if I was in the shoes of a founder with an early stage startup, I would go with freelancers. It goes along with the rest of the article, there’s too much uncertainty and you need the flexibility early on. Agencies are rigid and more expensive, they are generally better for the fixed-bid model. In house employees are more stable and long term, but they require a commitment, which you can make when you have a much better understanding of how your business functions.
For example, eventually you will make connections with developers, have a company culture, and be able to hire the right in house developers for your business. But early on, you probably will not know how to vet developers correctly. With a freelancer, you can give them some milestones, some test assignments, watch how you work together and switch them out very quickly, if necessary. With in house developers you don’t have that flexibility, hiring full time is a serious, lengthy process.
Hopefully, I explained some of the basics enough for you to get started with software development for your startup. The next step is for you to define what you want to build. Figure out what assumptions you are making about your business and customers. Figure out what’s the quickest way for you to test those assumptions and start engaging your customers. If it’s custom software (it’s not always custom software), spec out exactly what you want built for your MVP. Then figure out who you want to hire and go from there….
A Simple Blog With Comments on Django: Development and Deployment for the Smallest Ones
I assume that the reader is already familiar with Python syntax, has a minimal understanding of Django (it’s a good idea to start with the tutorials at http://codeacademy.com on the appropriate topic and to read a Django tutorial), and also knows how to work on the command line.
So, let’s start by organizing the working environment on the local computer. In principle, any operating system that you feel comfortable in will work for our purposes. Here I describe the process for GNU/Linux; for other systems the steps may differ slightly. The system must have virtualenv installed, a utility for creating an isolated working environment (so that the libraries we use do not interfere with other programs and projects).
Create and activate an environment:
mkdir ~/projects
cd ~/projects
virtualenv env
source env/bin/activate
In Windows, the last command should be like this:
env\Scripts\activate
Install Django using the Python PIP package Manager.
pip install django
Create a new project. Let’s call it something original — for example, mysite.
django-admin.py startproject mysite && cd mysite
The script will run and create a mysite directory with another mysite directory and several *.py files inside. Use the manage.py script to create a Django app named blog.
python manage.py startapp blog
Edit the settings in the file mysite/settings.py (note: I mean ~/projects/mysite/mysite/settings.py), adding the following:
# coding: utf-8
import os
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
In the first line, we specify the encoding we work in; to avoid confusion and glitches, I suggest specifying it in all modified *.py files and saving them in UTF-8. BASE_DIR will store the full path to our project so that we can use relative paths for further configuration.
Let’s set up the database; in our project it is quite possible to use SQLite:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}
Configure the time zone and language:
TIME_ZONE = 'Europe/Moscow'
LANGUAGE_CODE = 'ru-ru'
In order for Django to find out about the created app, add 'blog' to the INSTALLED_APPS tuple, and uncomment the 'django.contrib.admin' line to enable the built-in admin panel:
INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django.contrib.admin',
    'blog',
)
To make the admin panel work, edit mysite/urls.py
from django.conf.urls import patterns, include, url
from django.contrib import admin

admin.autodiscover()  # automatically discovers admin.py files in our apps

urlpatterns = patterns('',
    url(r'^admin/', include(admin.site.urls)),  # URL of the admin: http://site_name/admin/
)
Create a model in blog/models.py
from django.db import models

class Post(models.Model):
    title = models.CharField(max_length=255)  # the title of the post
    datetime = models.DateTimeField(u'Date of Publication')  # date of publication
    content = models.TextField(max_length=10000)  # the text of the post

    def __unicode__(self):
        return self.title

    def get_absolute_url(self):
        return "/blog/%i/" % self.id
Based on this model, Django will automatically create tables in the database.
Register it in the admin panel blog/admin.py
from django.contrib import admin
from blog.models import Post # our model from blog/models.py
admin.site.register(Post)
Create tables with the command:
python manage.py syncdb
When you first call this command, Django will ask to create a superuser; you should do that.
Start the debug server that Django provides:
python manage.py runserver
Now open the site in the browser. If everything went well, we should see the default Django page.
Go to the admin panel with the previously created username/password – now we can add and delete posts (buttons to the right of Posts).
Let’s create some posts for debugging.
Now let’s create a frontend. We need only two template pages: one with a list of all posts, the other with the content of a post.
Edit blog/views.py
from blog.models import Post
from django.views.generic import ListView, DetailView

class PostsListView(ListView):  # list view
    model = Post  # the model for the view

class PostDetailView(DetailView):  # detailed view of the model
    model = Post
Add this line to urlpatterns in mysite/urls.py:
url(r'^blog/', include('blog.urls')),
so that all URLs starting with /blog/ are processed by urls.py from the blog module. Then create urls.py itself in the blog module with the following content:
#coding: utf-8
from django.conf.urls import patterns, url
from blog.views import PostsListView, PostDetailView

urlpatterns = patterns('',
    # with URL http://site_name/blog/ a list of posts will be displayed
    url(r'^$', PostsListView.as_view(), name='list'),
    # with URL http://site_name/blog/number/ a post with a specific number will be displayed
    url(r'^(?P<pk>\d+)/$', PostDetailView.as_view()),
)
Now you need to create page templates. By default, for the PostsListView class, Django will look for a template at blog/templates/blog/post_list.html (such a long and strange path is associated with the logic of the framework; the developer can change this behavior, but I won’t touch on that in this article).
Let’s create this file:
{% block content %}
{% for post in object_list %}
<p>{{ post.datetime }}</p>
<h2><a href="{{ post.get_absolute_url }}">{{ post.title }}</a></h2>
<p>{{ post.content }}</p>
{% empty %}
<p>No Posts</p>
{% endfor %}
{% endblock %}
Ok, let’s try how it works by going to http://localhost:8000/blog/. If there are no errors, we will see a list of posts where the title of each post is a link. For now these links lead nowhere; we need to fix that. By default, for the PostDetailView class, the template is located at blog/templates/blog/post_detail.html.
Let’s create it:
{% block content %}
<p>{{ post.datetime }}</p>
<h2>{{ post.title }}</h2>
<p>{{ post.content }}</p>
{% endblock %}
And again check: http://localhost:8000/blog/1/
We will add the ability to comment on our posts. For this purpose, we will use the DISQUS service, which we will install using pip:
pip install django-disqus
This module provides comment functionality with anti-spam protection, avatars, etc., and also takes care of comment storage.
Add to post_detail.html before {% endblock %}
<p>
{% load disqus_tags %}
{% disqus_dev %}
{% disqus_show_comments %}
</p>
In INSTALLED_APPS, in settings.py add ‘disqus’
INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django.contrib.admin',
    'blog',
    'disqus',
)
And also add to settings.py
DISQUS_API_KEY = '***'
DISQUS_WEBSITE_SHORTNAME = '***'
The last two values are obtained by registering on http://disqus.com.
Test the project in the browser. Great, the functionality of our app is impressive, but we need to do something about the design. The easiest, and at the same time modern, option is to use Twitter Bootstrap.
Download the archive http://twitter.github.io/bootstrap/assets/bootstrap.zip and unzip it to the static directory of our project (I mean ~/projects/mysite/static – create it)
Edit settings.py so that Django knows where to look for static pages.
STATICFILES_DIRS = (
    os.path.join(BASE_DIR, 'static'),
)
Create blog/templates/blog/base.html with the following content:
<!DOCTYPE html>
<html lang="ru">
<head>
<meta charset="utf-8" />
<title>MyBlog</title>
<link href="{{STATIC_URL}}bootstrap/css/bootstrap.css" rel="stylesheet">
<style>
body {
padding-top: 60px; /* 60px to make the container go all the way to the bottom of the topbar */
}
</style>
<link href="{{STATIC_URL}}bootstrap/css/bootstrap-responsive.css" rel="stylesheet">
<!--[if lt IE 9]>
<script src="http://html5shim.googlecode.com/svn/trunk/html5.js"></script>
<![endif]-->
<script src="{{STATIC_URL}}bootstrap/js/bootstrap.js" type="text/javascript"></script>
{% block extrahead %}
{% endblock %}
<script type="text/javascript">
$(function(){
{% block jquery %}
{% endblock %}
});
</script>
</head>
<body>
<div class="navbar navbar-inverse navbar-fixed-top">
<div class="navbar-inner">
<div class="container">
<div class="brand">My Blog</div>
<ul class="nav">
<li><a href="{% url 'list' %}" class="">List of posts</a></li>
</ul>
</div>
</div>
</div>
<div class="container">
{% block content %}Empty page{% endblock %}
</div> <!-- container -->
</body>
</html>
This is the base template for our pages; include it in post_list.html and post_detail.html by adding this as the first line in each:
{% extends 'blog/base.html' %}
Check that everything works. Now that the design is set, you can start deploying the app on a free cloud hosting service.
Register a free N00b account on PythonAnywhere. I like this service for ease of installation of Django. Everything happens almost the same as on the local computer.
Let’s say we created a user in PythonAnywhere with the name djangotest, then our application will be located at djangotest.pythonanywhere.com. Note: replace ‘djangotest’ with your PythonAnywhere username everywhere in the text below.
Change in settings.py
DEBUG = False
and add
ALLOWED_HOSTS = ['djangotest.pythonanywhere.com']
Upload files to the host in any of the possible ways.
In my opinion, for an inexperienced user, the easiest way is to archive the project folder, upload the archive to the server (in the Files->Upload a file section) and unzip it on the server using the command in the bash shell (in the Consoles -> bash Section):
For example, if we upload mysite.tar.gz, run this in the PythonAnywhere console
tar -zxvf mysite.tar.gz
Now we configure the working environment on the server, run this in the PythonAnywhere console:
virtualenv env
source env/bin/activate
pip install django django-disqus
Configure static files in the Web -> Static files section. The first entry points to the location of Bootstrap; the second, to the static files of the built-in Django admin panel.
Configure WSGI (Web -> It is configured via a WSGI file stored at: …):
activate_this = '/home/djangotest/env/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))

import os
import sys

path = '/home/djangotest/mysite'
if path not in sys.path:
    sys.path.append(path)

os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'

import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
Click the button Web -> Reload djangotest.pythonanywhere.com
Go to djangotest.pythonanywhere.com/blog/ – congratulations, it wasn’t easy, but you did it. Now you have your own cozy blog, developed with your own hands on the most modern web technologies!
Written by Станислав Фатеев (Stanislav Fateev), translated from https://habr.com/en/post/181556/
Sending Emails Using asyncio and aiohttp From a Django Application
I develop and support the notification service at Ostrovok.ru. The service is written in Python 3 and Django. In addition to transactional emails, push notifications, and messages, the service also handles the mass emailing of marketing offers (not spam! trust me, unsubscribe works better than subscribe on our service) for users who have given their consent. Over time, the database of active recipients grew to more than a million email addresses, which the email service was not ready for. I want to talk about how new Python features allowed us to speed up mass emailing and save resources, and what problems we encountered when working with them.
Initially, we implemented mass emailing with the simplest solution: for each recipient, a task was placed in a queue, where one of 60 workers (a feature of our queues is that each worker runs in a separate process) prepared the context, rendered the template, sent an HTTP request to Mailgun to send the email, and created a record in the database that the email was sent. The entire process took up to 12 hours, sending about 0.3 emails per second from each worker and blocking emails for small campaigns.
Quick profiling showed that workers spent a large amount of time on setting up connections with Mailgun, so we started grouping tasks into chunks, one chunk for each worker. Workers began using a single connection with Mailgun, which dropped the time of emailing the list to 9 hours, each worker sending an average of 0.5 emails per second. Subsequent profiling again showed that network requests still took the majority of the time, which led us to the idea of using asyncio.
Before putting all the processing in an asyncio loop, we had to solve several problems.
Taking all this into account, we will create our own asyncio loop inside each of the workers with the ThreadPool type of pattern consisting of:
import asyncio
from concurrent.futures import ThreadPoolExecutor
from typing import Any, Iterable, Mapping, Union

import aiohttp

# Tuning constants -- the values here are illustrative placeholders
PRODUCER_SUBCHUNK_SIZE = 100
PRODUCERS_COUNT = 2
SENDERS_COUNT = 50
REPORTER_BATCH_SIZE = 100
REQUEST_URL = 'https://example.com/send'  # placeholder for the real sending endpoint
API_KEY = '...'


def get_campaign_send_data(ids: Iterable[int]) -> Iterable[Mapping[str, Any]]:
    """We generate email data; here we work with the Django ORM and template rendering."""
    return [{'id': id} for id in ids]


async def mail_campaign_producer(ids: Iterable[int], task_queue: asyncio.Queue) -> None:
    """
    We group recipients into subchunks and generate the data to send for them,
    which we place in the queue. Data generation requires working with the
    database, so we perform it in the ThreadPoolExecutor.
    """
    loop = asyncio.get_event_loop()
    total = len(ids)
    for subchunk_start in range(0, total, PRODUCER_SUBCHUNK_SIZE):
        subchunk_ids = ids[subchunk_start : min(subchunk_start + PRODUCER_SUBCHUNK_SIZE, total)]
        send_tasks = await loop.run_in_executor(None, get_campaign_send_data, subchunk_ids)
        for task in send_tasks:
            await task_queue.put(task)


async def send_mail(data: Mapping[str, Any], session: aiohttp.ClientSession) -> Union[Mapping[str, Any], Exception]:
    """Sending a request to an external service."""
    async with session.post(REQUEST_URL, data=data) as response:
        # aiohttp exposes the HTTP status as `response.status`
        if response.status != 200:
            raise Exception
    return data
async def mail_campaign_sender(
    task_queue: asyncio.Queue, result_queue: asyncio.Queue, session: aiohttp.ClientSession
) -> None:
    """
    Getting data from the queue and sending network requests.
    Don't forget to call task_done so that the calling code will know when
    the email is sent.
    """
    while True:
        try:
            task_data = await task_queue.get()
            result = await send_mail(task_data, session)
            await result_queue.put(result)
        except asyncio.CancelledError:
            # Correctly processing cancellation of the coroutine
            raise
        except Exception as exception:
            # Processing errors in email sending
            await result_queue.put(exception)
        finally:
            task_queue.task_done()


def process_campaign_results(results: Iterable[Union[Mapping[str, Any], Exception]]) -> None:
    """We process the results of sending, both exceptions and successes, and write them to the database."""
    pass


async def mail_campaign_reporter(task_queue: asyncio.Queue, result_queue: asyncio.Queue) -> None:
    """
    We group reports into a list and pass them to the ThreadPoolExecutor for
    processing, to write emailing results to the database.
    """
    loop = asyncio.get_event_loop()
    results_chunk = []
    while True:
        try:
            results_chunk.append(await result_queue.get())
            if len(results_chunk) >= REPORTER_BATCH_SIZE:
                await loop.run_in_executor(None, process_campaign_results, results_chunk)
                results_chunk.clear()
        except asyncio.CancelledError:
            await loop.run_in_executor(None, process_campaign_results, results_chunk)
            results_chunk.clear()
            raise
        finally:
            result_queue.task_done()
async def send_mail_campaign(
recipient_ids: Iterable[int], session: aiohttp.ClientSession, loop: asyncio.AbstractEventLoop = None
) -> None:
"""
Creates a queue and starts workers for processing.
Waits for recipients to be generated, then waits for reports to be sent and saved.
"""
executor = ThreadPoolExecutor(max_workers=PRODUCERS_COUNT + 1)
loop = loop or asyncio.get_event_loop()
loop.set_default_executor(executor)
task_queue = asyncio.Queue(maxsize=2 * SENDERS_COUNT, loop=loop)
result_queue = asyncio.Queue(maxsize=2 * SENDERS_COUNT, loop=loop)
producers = [
asyncio.ensure_future(mail_campaign_producer(recipient_ids, task_queue)) for _ in range(PRODUCERS_COUNT)
]
consumers = [
asyncio.ensure_future(mail_campaign_sender(task_queue, result_queue, session)) for _ in range(SENDERS_COUNT)
]
reporter = asyncio.ensure_future(mail_campaign_reporter(task_queue, result_queue))
# We are waiting for all the letters to be prepared
done, _ = await asyncio.wait(producers)
# When all sends are completed, we stop the workers
await task_queue.join()
while consumers:
consumers.pop().cancel()
# When report saving is complete, we also stop the corresponding worker
await result_queue.join()
reporter.cancel()
async def close_session(future: asyncio.Future, session: aiohttp.ClientSession) -> None:
"""
Close the session when all processing is complete.
The aiohttp documentation recommends adding a delay before closing the session.
"""
await asyncio.wait([future])
await asyncio.sleep(0.250)
await session.close()
def mail_campaign_send_chunk(recipient_ids: Iterable[int]) -> None:
"""
Entry point for starting a mailing list.
Accepts recipient IDs, creates an asyncio loop, and starts the sending coroutine.
"""
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
# Session
connector = aiohttp.TCPConnector(limit_per_host=0, limit=0)
session = aiohttp.ClientSession(
connector=connector, auth=aiohttp.BasicAuth('api', API_KEY), loop=loop, read_timeout=60
)
send_future = asyncio.ensure_future(send_mail_campaign(recipient_ids, session, loop=loop))
cleanup_future = asyncio.ensure_future(close_session(send_future, session))
loop.run_until_complete(asyncio.wait([send_future, cleanup_future]))
loop.close()
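For completeness, here is a rough sketch of how the entry point above might be driven, assuming the campaign is split into chunks that our queue distributes across workers; the chunk size and the enqueue_campaign wrapper are illustrative, not our production code:
CHUNK_SIZE = 10_000  # illustrative

def enqueue_campaign(recipient_ids):
    # In production each chunk would be enqueued as a separate queue task;
    # here we call the entry point directly for illustration
    for start in range(0, len(recipient_ids), CHUNK_SIZE):
        mail_campaign_send_chunk(recipient_ids[start:start + CHUNK_SIZE])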
After implementing this solution, the time for sending mass emails dropped to an hour for the same volume of emails, with 12 workers involved. That is, each worker sends 20-25 emails per second, which is 50-80 times faster than the original solution. The memory consumption of the workers remained at the same level, the processor load increased slightly, and network utilization increased many times over, which is the expected effect. The number of database connections also increased, since each of the producer threads and report-saving threads actively works with the database. Meanwhile, the freed-up workers can send smaller emailing lists while the mass campaign is going out.
Despite all the advantages, this implementation has a number of issues that have to be taken into account.
I hope our experience will be useful to you! If you have any questions or ideas, please write in the comments!
Written by Sergey, translated from here
The post Sending Emails Using asyncio and aiohttp From a Django Application appeared first on QueWorx.
Hiring Developers. Tips From a Developer
My impressions? I’m sad… Almost all of the articles, in my opinion, remind me of “bad advice”.
Just a warning, the whole article is a purely personal opinion, but supported by developer friends and colleagues.
And so…
HRs, don’t kid yourself. You will never understand how good a developer is…
Unless you can stick electrodes in their ear and run end-to-end tests… But since there is no such technology, all you can assess is the adequacy and, at least in part, the motivation of the person sitting in front of you.
And believe me, that’s enough.
After all, your task is to find a person who can join the team, work productively in it, and have their work rewarded in the way they expect and that your company can provide (money, recognition, exciting projects, etc.).
All attempts to quiz candidates on technical nuances will look inappropriate and helpless. Personally, it really annoys me when I am asked about something by people who don’t understand it themselves. I just want to get up and leave.
What else can you ask at the first stage? It depends on the specifics of the job.
If you need an experienced person — ask about the experience, find out what problems they solved, what difficulties they overcame.
If you need a person who can be trained, give them a couple of logic problems and check how their brain performs. The information collected in the first stage will be enough to screen out 80%-90% of candidates.
DO NOT ASK THEORY outside the context of the practical experience of a particular developer!
Personally, I know several people who studied to be developers with me. They knew all the theory by heart, but when it came to real programming, they couldn’t do anything useful.
Ask about technical nuances from their previous experience, especially ones that overlap with the future work.
From the way the person talks about it, a lot will become clear.
And I think that’s enough to make the final choice.
You can only learn more about the person during the trial period.
I hope this material will be useful to someone, thank you for your attention.
Written by Konstantin, translated from here
The post Hiring Developers. Tips From a Developer appeared first on QueWorx.
Why You Should Try FastAPI
FastAPI is a relatively new web framework written in Python for creating REST (and, if you try really hard, GraphQL) APIs, based on newer features of Python 3.6+ such as type hints and native async support (asyncio). Among other things, FastAPI integrates tightly with the OpenAPI schema and automatically generates documentation for your API via Swagger and ReDoc.
FastAPI is based on Starlette and Pydantic.
Starlette is an ASGI microframework for writing web applications.
Pydantic is a library for parsing and validating data based on Python type hints.
“[…] I’m using fastapi a ton these days. […] I’m actually planning to use it for all of my team’s ML services at Microsoft. Some of them are getting integrated into the core Windows product and some Office products.”
Kabir Khan — Microsoft (ref)
“If you’re looking to learn one modern framework for building REST APIs, check out FastAPI. […] It’s fast, easy to use and easy to learn. […]”
“We’ve switched over to FastAPI for our APIs […] I think you’ll like it […]”
Ines Montani — Matthew Honnibal — Explosion AI founders — spaCy creators (ref) — (ref)
I will try to show you how to create a simple but useful API with documentation for developers. We will write a random phrase generator!
pip install wheel -U
pip install uvicorn fastapi pydantic
One new module here: Uvicorn, an ASGI-compatible web server that we will use to run our application.
First, let’s create the basis of our application.
from fastapi import FastAPI
app = FastAPI(title="Random phrase")
This app already works and can be started.
Run the following command in your terminal and open the page in the browser at the address http://127.0.0.1:8000/docs.
uvicorn <your filename>:app
But so far, our app doesn’t have any endpoints — let’s fix that!
Since we’re writing a random phrase generator, we obviously have to store the phrases somewhere. For that, I chose a simple Python dict.
Let’s create the file db.py and start writing code.
Import the necessary modules:
import typing
import random
from pydantic import BaseModel
from pydantic import Field
Then – we will designate two models: the input phrase (the one that the user will send to us) and the “output” (the one that we will send to the user).
class PhraseInput(BaseModel):
    """Phrase model."""
    author: str = "Anonymous"  # Author name; if not passed, the default value is used
    text: str = Field(..., title="Text", description="Text of phrase", max_length=200)  # Phrase text, at most 200 characters

class PhraseOutput(PhraseInput):
    id: typing.Optional[int] = None  # ID of the phrase in our database
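Before moving on, here is a tiny check (my own snippet, not part of the app) showing what Pydantic does with these models: the default author is filled in, and text longer than 200 characters is rejected.
from pydantic import ValidationError

phrase = PhraseInput(text="Hello, world!")
print(phrase.author)  # "Anonymous", the default kicks in

try:
    PhraseInput(text="x" * 201)  # violates max_length=200
except ValidationError as error:
    print(error)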
After that, we will create a simple class to work with the database:
class Database:
    """
    Our **fake** database.
    """

    def __init__(self):
        self._items: typing.Dict[int, PhraseOutput] = {}  # id: model

    def get_random(self) -> int:
        # Getting a random phrase id; random.choice needs a sequence,
        # so we turn the dict keys into a list first
        return random.choice(list(self._items.keys()))

    def get(self, id: int) -> typing.Optional[PhraseOutput]:
        # Getting a phrase by id
        return self._items.get(id)

    def add(self, phrase: PhraseInput) -> PhraseOutput:
        # Adding a phrase
        id = len(self._items) + 1
        phrase_out = PhraseOutput(id=id, **phrase.dict())
        self._items[phrase_out.id] = phrase_out
        return phrase_out

    def delete(self, id: int) -> None:
        # Deleting a phrase
        if id in self._items:
            del self._items[id]
        else:
            raise ValueError("Phrase doesn't exist")
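And a quick, purely illustrative sanity check of this fake database:
db = Database()
saved = db.add(PhraseInput(text="Simple is better than complex."))
print(saved.id)  # 1, ids are assigned sequentially
print(db.get(db.get_random()).text)  # our phrase comes back
db.delete(saved.id)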
Now we can start writing the API itself.
Let’s create the file main.py and import the following modules:
from fastapi import FastAPI
from fastapi import HTTPException
from db import PhraseInput
from db import PhraseOutput
from db import Database
Initialize our application and database:
app = FastAPI(title="Random phrase")
db = Database()
And let’s write a simple method for getting a random phrase!
@app.get(
    "/get",
    response_description="Random phrase",
    description="Get random phrase from database",
    response_model=PhraseOutput,
)
async def get():
    try:
        phrase = db.get(db.get_random())
    except IndexError:
        raise HTTPException(404, "Phrase list is empty")
    return phrase
As you can see, I also specify some other values in the decorator to generate pretty documentation. You can look at all the possible parameters in the official documentation.
In this piece of code, we try to get a random phrase from the database, and if the database is empty, we return an error with the code 404.
Similarly, we write other methods:
@app.post(
    "/add",
    response_description="Added phrase with *id* parameter",
    response_model=PhraseOutput,
)
async def add(phrase: PhraseInput):
    phrase_out = db.add(phrase)
    return phrase_out

@app.delete("/delete", response_description="Result of deletion")
async def delete(id: int):
    try:
        db.delete(id)
    except ValueError as e:
        raise HTTPException(404, str(e))
That’s all! Our small but useful API is ready!
Now we can launch the app using uvicorn, open the online documentation (http://127.0.0.1:8000/docs), and try our API!
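If you would rather exercise the API from code than from Swagger UI, here is a small sketch using the requests library, assuming the server is running locally on port 8000:
import requests

BASE = "http://127.0.0.1:8000"

# Add a phrase; the JSON body must match the PhraseInput model
added = requests.post(f"{BASE}/add", json={"author": "Guido", "text": "Readability counts."}).json()
print(added)  # PhraseOutput with the assigned id

# Fetch a random phrase
print(requests.get(f"{BASE}/get").json())

# Delete it again; id is passed as a query parameter
requests.delete(f"{BASE}/delete", params={"id": added["id"]})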
Of course, I couldn’t tell you about all the features of FastAPI, such as its smart DI system, middlewares, cookies, standard API authentication methods (JWT, OAuth2, API key), and much more!
But the purpose of this article is not so much to review all the features of this framework, but rather to encourage you to explore it yourself. FastAPI has excellent documentation with a bunch of examples.
The code from this article on Github
Official documentation
Repository on Github
Written by prostomarkeloff, translated from here
For additional information, check out this Toptal tutorial on how to build a high-performance app in FastAPI.
The post Why You Should Try FastAPI appeared first on QueWorx.
React Native – A Silver Bullet for All Problems? How We Choose a Cross-Platform Tool for Profi.ru
It all started with our decision to speed up development by 10 times at our company. We set an impossible goal to go beyond our familiar surroundings and try new things. All the development teams at Profi.ru took on experiments. At that time, the company had 13 native mobile developers, including two team leads and me. My team worked on two mobile apps: in the first, clients look for specialists; in the second, specialists look for clients. For me, this period was confusing and emotionally stressful. I felt like we had already done enough to make everything work quickly.
We used a common architecture across projects and kept the code clean. We used generators to create all the module files. We tried to move all the business logic to the backend. We set up CI/CD and covered the applications with end-to-end tests. Thanks to all this, the apps were released steadily once a week. I had no idea how to speed up development even by two times. How could we possibly do 10? And so, we wrote down what was important to us.
After a little research, we settled on three candidates: React Native, Flutter, and Kotlin/Native. Neither Flutter nor Kotlin/Native lets you ship releases instantly, and we thought that was probably the most important thing on our list. Also, those technologies were quite raw at the time. So we settled on React Native: with it, we can release instantly. Plus, most of our developers had already used React.
In general, I had a negative attitude towards cross-platform tools, like most native mobile developers. Go to any mobile conference and bring it up, and you will immediately be pelted with stones. I like to do that myself :-) So, to confirm or refute our concerns, we conducted our own investigation.
We studied examples of React Native use in various companies, successful and not so much. Together with our head of development, Boris Egorov, we carefully read more than three dozen articles and other docs; some of them we discussed paragraph by paragraph. The most interesting links are collected at the end of the article. We noted things that could speed us up, along with possible risks and issues. After that, we talked to developers from three companies. In each, the team had built a mass-market product and worked with React Native for at least a year.
The pros were pretty obvious.
The list of risks was longer.
The first risk. Instead of one platform, we have to support three in the long run: Android, iOS, and React Native.
Reality. One of the developers we talked to had been embedding React Native into an existing codebase. Yes, it really is a full-fledged third platform, and you don’t get away from native development. His team had to synchronize state between React Native and the native code. This involved a lot of switching between different parts of the code, different paradigms, and different IDEs. So they decided to write a new project from scratch, build it on React Native, and insert ready-made native pieces where needed. It got better.
The second risk. React Native as a black box: sometimes there are situations where the developer does not understand what caused a bug. You have to search everywhere: in your React Native code, in the native part of the product, or in the React Native framework itself.
Reality. The people we talked to attached logging and various tools to the app: Crashlytics, Kibana, and so on. Problems remain, but it becomes clearer where they occur.
The third risk. In the articles, it was often mentioned that React Native is suitable for small projects, but not for large products with platform functionality.
Reality. We looked at the market to see if any big companies work with React Native. Turns out there are dozens, if not hundreds. Including Skype, Tesla, Walmart, Uber Eats and “Кухня на районе.”
The fourth risk. The project may break with any operating system update from Apple or Google.
Reality. We decided the risk was acceptable. The same risk exists for native development: when a new version of iOS or Android comes out, you adapt your app to it.
The fifth risk. React Native had no support for 64-bit systems on Android, and the issue had been open since 2015. And since August 2019, Google Play has not accepted builds that support only 32-bit systems.
Reality. We looked at the issue, which the React Native team was working on in the summer of 2018. They promised to add support in the next release, yet they still hadn’t fully fixed 64-bit support, which was very upsetting. Support was eventually added, but some Android devices fail after the transition. As we found out later, the percentage is insignificant, but it was still the most painful point for me.
The sixth risk. The likelihood that tomorrow Apple or Google will release a new version of their OS and break React Native, or ship new technology that we at Profi.ru would not be able to support.
Reality. Nobody has such guarantees, neither other companies nor us. You either accept the risk and work with it, or you try something else. We decided to work with it and to solve problems as they came up.
The seventh risk. We could not tell in advance how fast React Native would be compared to a native application and what performance we could expect.
Reality. A verbatim quote from one of our conversations: “when scrolling, lists of elements of variable height slowed down.” We decided to test it in practice. Jumping ahead a little: while writing the first prototype of the application, we did not see that problem, but when developing a full-fledged application, many questions about React Native performance did come up.
The eighth risk. It was not clear how quickly we could find React Native developers. On HeadHunter, I saw about 300 resumes, even though there were more than 150 thousand for iOS.
Reality. We didn’t dig too deep into this, as we had already hired React developers many times and knew what to look for. We decided that, as a last resort, we could retrain React developers for React Native.
There was also the risk that someone would leave the team, since mobile developers don’t like this technology. I was right, by the way: someone did leave.
We discussed the results of the investigation with the company’s founders, Sergey Kuznecov and Yegor Rudy, and got the go-ahead to conduct the experiment.
We decided to create a new product from scratch instead of embedding React Native into an existing one. We also didn’t want to touch our client app, the one clients use to find specialists. It was quite mature, and economically it did not make sense to change it radically. Besides, it was essential to us that the client application keep its own native experience on both iOS and Android.
We did want to change the app for specialists drastically. In contrast to the client app, we did not mind specialists having the same interaction experience on iOS and Android. Plus, we believed the product for specialists could do without animation and visual effects. But before switching the whole team to the new technology, we needed to get a feel for how it works.
In December 2018, we assembled a team of three people: two React developers and one native developer, me. I understand how Android works and am well versed in iOS development.
As part of the experiment, we wanted to put these risks to the test. We got the first results within a month and a half of diving into development.
I want to go back to the title. So is React Native a silver bullet for all problems? We decided for ourselves that no, it isn’t. At the same time, we got what we wanted: we increased feature delivery speed several times over, and we can now release to all users every day. It is also important that the company now has cross-functional teams, where each developer writes code both for Android/iOS and for the web.
And yes, the apps are in the stores.
Written by Gevorg Petrosian, translated from here
The post React Native – A Silver Bullet for All Problems? How We Choose a Cross-Platform Tool for Profi.ru appeared first on QueWorx.
Getting Started with Interactive Brokers API in Java
First, download and install Trader Workstation from the Interactive Brokers site – here.
Then grab the API from here. You are just looking for the TwsApi.jar from that package, so you can add it to your project.
You’ll also want to start TWS, go into configuration -> API -> Settings, and check Enable ActiveX and Socket Clients. Take note of the socket port as well; you will need it later.
We’ll start by adding a broker class to wrap all the Interactive Brokers API code; this is how our application will call IB. Let’s start by adding a connect() and a disconnect() function, so your class should start like this:
(IBBroker.java)
import com.ib.client.EClientSocket;

public class IBBroker {
    private EClientSocket __clientSocket;
    private IBDatastore __ibDatastore;

    public IBBroker(EClientSocket clientSocket, IBDatastore ibDatastore) {
        __clientSocket = clientSocket;
        __ibDatastore = ibDatastore;
    }

    public void connect() {
        // ip_address, port, and client ID. The client ID identifies the app connecting to TWS;
        // you can have multiple apps connect to one TWS instance
        __clientSocket.eConnect("127.0.0.1", 7497, 1);
    }

    public void disconnect() {
        __clientSocket.eDisconnect();
    }
}
To get messages/data from Interactive Brokers we have to implement their EWrapper interface.
(IBReceiver.java)
import com.ib.client.*;
import java.util.Set;
public class IBReceiver implements EWrapper {
    // We hold on to the data store so the callbacks below can write into it
    private IBDatastore __ibDatastore;

    public IBReceiver(IBDatastore ibDatastore) {
        __ibDatastore = ibDatastore;
    }

    @Override
    public void tickPrice(int i, int i1, double v, int i2) {
    }

    // ... the rest of the EWrapper methods are stubbed out here
}
There are going to be lots of methods that we have to override, but technically we don’t have to fill out any of them, since they are all void. We definitely want to implement the error() functions, since we want to know when something goes wrong. Other than that, we will just implement functions as we need them.
We also want to add a data store class that will hold all the data that comes back or we set for IB. That way IBBroker and IBReceiver will be able to use the same data, plus you can pass this data store to any other class and they don’t have to know about IBBroker or IBReceiver.
(IBDatastore.java)
import java.util.HashMap;

public class IBDatastore {
    // Set from EWrapper's nextValidId() callback once TWS is ready
    public Integer nextValidId = null;

    // Tick is a simple data class of ours (bid/ask/last/modified_at), not shown here
    private HashMap<Integer, Tick> __ticks = new HashMap<Integer, Tick>();

    public Tick getLatestTick(int symbolId) {
        return __ticks.get(symbolId);
    }
}
And finally, we tie it all together:
(Main.java)
package com.queworx;
import com.ib.client.EClientSocket;
import com.ib.client.EJavaSignal;
import com.ib.client.EReaderSignal;
public class Main {
    public static void main(String[] args) {
        // Signal processing with TWS; we will not be using it
        EReaderSignal readerSignal = new EJavaSignal();

        IBDatastore ibDatastore = new IBDatastore();
        IBReceiver ibReceiver = new IBReceiver(ibDatastore);
        EClientSocket clientSocket = new EClientSocket(ibReceiver, readerSignal);
        IBBroker ibBroker = new IBBroker(clientSocket, ibDatastore);

        try
        {
            ibBroker.connect();

            // Wait for nextValidId
            for (int i = 0; i < 10; i++) {
                if (ibDatastore.nextValidId != null)
                    break;
                Thread.sleep(1000);
            }

            if (ibDatastore.nextValidId == null)
                throw new Exception("Didn't get a valid id from IB");

            // From here you can add the logic of your application
        }
        catch (Exception ex)
        {
            System.err.println(ex);
        }
        finally
        {
            ibBroker.disconnect();
            System.exit(0);
        }
    }
}
Notice that before we issue any requests to IB, we wait for nextValidId to be set. We use that id when creating an order, but in general it indicates that the connection has been established and TWS is ready to receive requests.
We will be using our broker to request quote information. We have to create a Contract and pass it to reqMktData. We also need to give unique int ids to our instruments; IB will give those ids back to us in the callbacks.
(IBBroker.java)
...
public void subscribeQuoteData(int tickerId, String symbol, String exchange) {
    // full doc here - https://interactivebrokers.github.io/tws-api/classIBApi_1_1Contract.html
    Contract contract = new Contract(0, symbol, "STK", null, 0.0d, null,
            null, exchange, "USD", null, null, null,
            "SMART", false, null, null);

    // We are asking for additional shortable (236) and fundamental ratios (258) information.
    // The false says that we don't want a snapshot, we want a streaming feed of data.
    // https://interactivebrokers.github.io/tws-api/classIBApi_1_1EClient.html#a7a19258a3a2087c07c1c57b93f659b63
    __clientSocket.reqMktData(tickerId, contract, "236,258", false, null);
}
...
For receiving information, we will need to fill out tickPrice(), tickSize(), and tickGeneric() in IBReceiver to get the extra info we requested. For example, modifying tickPrice():
(IBReceiver.java)
...
@Override
public void tickPrice(int tickerId, int field, double price, int canAutoExecute) {
    // Fields: 1 = bid, 2 = ask, 4 = last
    if (field != 1 && field != 2 && field != 4)
        return;

    // Assumes a Tick object was stored for this tickerId when we subscribed
    Tick tick = __ibDatastore.getLatestTick(tickerId);

    if (field == 1)
        tick.bid = price;
    else if (field == 2)
        tick.ask = price;
    else if (field == 4)
        tick.last = price;

    tick.modified_at = System.currentTimeMillis();
}
...
The full list of field types is here: https://interactivebrokers.github.io/tws-api/tick_types.html
Let’s modify our IBBroker to be able to place orders.
(IBBroker.java)
...
private void createOrder(String symbol, String exchange, int quantity, double price, boolean buy)
{
    // Moved this out into its own method
    Contract contract = __createContract(symbol, exchange);
    int orderId = __ibDatastore.nextValidId;

    // __clientId and __ibAccount are assumed to be fields set elsewhere (e.g. in the constructor)
    // https://interactivebrokers.github.io/tws-api/classIBApi_1_1Order.html
    Order order = new Order();
    order.clientId(__clientId);
    order.transmit(true);
    order.orderType("LMT");
    order.orderId(orderId);
    order.action(buy ? "BUY" : "SELL");
    order.totalQuantity(quantity);
    order.lmtPrice(price);
    order.account(__ibAccount);
    order.hidden(false);
    order.minQty(100);

    __clientSocket.placeOrder(orderId, contract, order);

    // We can either request the next valid orderId or just increment it
    __ibDatastore.nextValidId++;
}
...
Then, on the receiver side, we are going to look at the order status. orderStatus will be called when you submit the order and then any time anything changes, and you might receive multiple messages for the same event.
(IBReceiver.java)
...
@Override
public void orderStatus(int orderId, String status, double filled, double remaining, double avgFillPrice, int permId, int parentId, double lastFillPrice, int clientId, String whyHeld) {
    /**
     * Here we can check on how our order did. If it partially filled, we might want to resubmit
     * at a different price. We might want to update our budget so that we don't trade any more
     * positions. Etc. All of this is a bit beyond the scope of this tutorial.
     */
}
...
The API itself is incredibly complicated, just like the TWS app itself. You can trade various instruments – stocks, bonds, options, futures, etc. – and there are all sorts of orders with all sorts of options. But this tutorial will hopefully get you started, so that you can at least get something basic going and then add complexity as needed.
This tutorial’s code is on Github. If you need something more advanced, check out the full IB trader that I wrote a long time ago using the Groovy language.
Written by Eddie Svirsky
The post Getting Started with Interactive Brokers API in Java appeared first on QueWorx.
When and how to use outstaffing services?
When you have a project and need some software development done, you have a few options. You can hire employees, hire contractors, find a company that will do the project for you (outsource), or hire developers from another company to work for you (outstaff). These are just different models for hiring people to work on your software, each with its own strengths and weaknesses, and you should use the appropriate one for your specific scenario.
Outsourcing is only really suitable when you have a well-defined project to begin with, which is most often not the case. If you are building long term and your requirements are constantly changing, you want to control development yourself. An ideal scenario for outsourcing would be, for example, adding an AI module to your current project. It’s a well-defined project that you don’t have the in-house expertise to do, so you would set clear requirements and pass it off to a company that specializes in AI. They would then deliver a single self-contained package, and that specific engagement would be over.
If your use case doesn’t fit the outsourcing model, then you have to consider hiring employees or contractors. Employees are permanent placements in your company; if you have an ongoing project, it makes sense to hire employees to control development and keep knowledge in house. Contractors make sense when you are looking for a temporary engagement: for example, when you have a tight deadline and need more resources to shore up your team, or when you want an expert in some technology to come in, set it up, get the rest of your team up to speed on how to use it, and then leave.
Outstaffing and hiring contractors are very similar. The only real difference is whether you engage contractors directly or go through an agency that engages them for you. The main benefit of going through an agency is that you don’t have to spend time on recruitment, which is very time-consuming. The agency, in theory, does recruitment full time, is good at screening candidates, and has a large pool of proven candidates to call on. That also means the agency can give you more flexibility to scale up or down than if you did it yourself.
You can outstaff both local and offshore staff. The big benefit of offshore staff is the massive reduction in costs: for the price of one local employee, you can get two and still maintain the same level of quality. The trade-off is language barriers and time-zone issues.
A good model is hiring a combination of local and offshore resources to minimize the downsides while still keeping knowledge in house and reducing costs. For example, a local team lead who can communicate with and manage the remote team. This model keeps getting more attractive as remote tools such as Slack get better.
In general, choosing a good company is as essential as choosing a good developer. A bad outstaffing company will just try to fill bodies, and the quality of candidates that you will be getting will be sub-par.
The best way to find a good company, as always, is word of mouth. If someone you know is happy with a company, that’s a good indicator that the company takes its job seriously and can be trusted.
The next best way is user reviews, although these are not always reliable. In the US, these companies are known as “staffing agencies.” If you go to clutch.co, you can see a large list of local and offshore staffing companies with reviews. beststaffingagencies.com also has a large list of staffing agency reviews and scores.
If you are looking for offshore staff, you can also go on Upwork and engage one of the outsourcing agencies there; they are all willing to outstaff as well as outsource, and all have lots of reviews. Make sure to engage a reputable company, not just the lowest-cost provider. The decision between offshore and local staff is another big topic that will have to be covered in a different article.
Written by Eddie Svirsky
The post When and how to use outstaffing services? appeared first on QueWorx.