
Real-Time Event Streaming with Kafka & BigQuery | by Tobi Sam | Nov, 2022

A simple data engineering project

Photo by Filip on Unsplash

Real-time streaming applications can be tricky at times, and when trying to learn about them, selecting a practical use case is key to fostering a fun and effective learning experience. So with the example below, I hope you will be able to grasp the fundamentals of building a real-time application in an easy way.

Let us imagine we work in the data engineering department of a music streaming company and we need to create a real-time dashboard that shows the most popular songs of a particular artist (let's say Tony Allen) over time. To build this, we will make use of a popular distributed streaming platform, Kafka, to produce, consume and stream the necessary song events into BigQuery so we can visualise the popular songs on a dashboard in Looker Studio.

Our architecture will look like this in the end:

Real-time streaming architecture — Image by author

Terms & Concepts

Let's quickly define some terms which we will cover in this article.

  • Kafka: Apache Kafka is an open-source, distributed streaming platform that enables (among other things) the development of real-time, event-driven applications, which is perfect for our use case.
  • Kafka cluster: A set of servers (called Brokers), working together to provide high availability, fault tolerance, and storage for real-time applications.
  • Broker: As stated above, a Broker is a machine that does the actual work in a Kafka cluster. It hosts some set of partitions, handles incoming requests to write new events to those partitions, and allows consumers to fetch messages by topic, partition, and offset.
  • Topic: A topic is simply a log of events. Every new event from a Producer is appended to the end of a topic. And topics are divided into partitions.
Image by author
  • Producer: An application you write that publishes (produces) data to a Topic in a Kafka cluster.
  • Consumer: An application or end-user that retrieves data from Kafka clusters in real time. To effectively fetch real-time messages, Kafka consumers have to subscribe to the respective topics present in the cluster.
  • Zookeeper: Keeps track of the status of the Kafka cluster nodes, and it also keeps track of Kafka topics, partitions, and more. (Note: An update called KIP-500 removed the need for Zookeeper, but we will not be using that version of Kafka in this article.)
  • Poll: The poll() method is the function a Kafka consumer calls to retrieve records from a given topic.

We will set up the architecture above in 4 steps. But before we begin, make sure you have these prerequisites:


Kafka can be deployed in many ways, but we will deploy it using Docker as it's quite straightforward.

Our Kafka cluster will have two main entities;

  • 1 Broker instance and
  • 1 Zookeeper instance.
Simple Kafka Cluster — Image by author

We will use a single Docker compose file to configure and run these containers. You'll find the two services and the required ports exposed in the docker-compose.yaml file below:
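A minimal version of such a file could look like the sketch below, using the standard Confluent Docker images. The image tags, container names, and the 2181/9092 port mappings here are common defaults and assumptions on my part, not necessarily the exact configuration from the original file:

```yaml
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.0
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  broker:
    image: confluentinc/cp-kafka:7.3.0
    container_name: broker
    ports:
      - "9092:9092"    # exposed so local producer/consumer scripts can connect
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
```

With this setup, applications running on your host machine reach the broker at localhost:9092, while containers on the compose network would use broker:29092.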

Make sure the Docker compose file is in the same directory as the Kafka producer and consumer files that we will write shortly.

To build both Docker containers, run this command and you should have the two containers up and running within a few minutes.

docker-compose up -d

We will write an application/Producer that mimics user activity on the music streaming platform. This application will send an event called song-completed that is triggered when a user completes a song. This event will be sent to a Kafka topic which we will call tony-allen-plays.

An architecture of a Producer and Consumer interacting with a Topic within a Kafka cluster — Image by author

We will use the Faker package to generate fake streaming data for our application. Our fake event payload will look something like this:

{'user_id': 23421,
'artist': 'tony-allen',
'song_id': 3,
'song_name': 'lady',
'timestamp': '2022-11-03 07:22:13'}

To install the Faker package, run this in a terminal window:

pip install Faker

Generate a fake songs list

Now in our code, we will initiate the Faker object and create a hard-coded song list of 10 random songs by Tony Allen which will be part of the event payload.

I picked out random songs from his song list on Google — screenshot by author
from confluent_kafka import Producer
from faker import Faker
import json
import time
import logging

#Create Faker object to generate fake data for Producer
fake = Faker()

#Create Tony Allen song list
songs = ["zombie", "lady", "secret-agent", "kindness", "soldiers", "asiko", "the-same-blood", "upside-down", "african-man", "vip"]

Configure log format

Every time a new event becomes available, logs will be appended to a producer.log file — which we define below — in your main directory. Here, we are setting the basic configurations for how we want this log file to be formatted.

#Configure logger
logging.basicConfig(format='%(asctime)s %(message)s',
                    datefmt='%Y-%m-%d %H:%M:%S',
                    filename='producer.log')

logger = logging.getLogger()
logger.setLevel(logging.INFO)

Initiate the producer

Initiate the Kafka producer object by specifying the port of your Kafka cluster as defined in the Docker compose file above:

#Create Kafka Producer
p = Producer({'bootstrap.servers': 'localhost:9092'})
Configure a callback

Define a callback function that takes care of acknowledging new messages or errors. When a valid message becomes available, it is decoded to utf-8 and printed in the preferred format. The same message is also appended to the logs file.

#Callback function
def receipt(err, msg):
    if err is not None:
        print('Failed to deliver message: {}'.format(err))
    else:
        message = 'Produced message on topic {} with value of {}\n'.format(msg.topic(), msg.value().decode('utf-8'))
        logger.info(message)
        print(message)

Write a producer loop

This is the fun part! Here we just create a loop with a 3-second delay that mimics actual user activity on the streaming platform. We create a schema for our JSON event and utilise Faker to generate the actual data points.

#Write Producer loop
def main():
    for i in range(20):
        random_song_id = fake.random_int(min=0, max=9)
        data = {
            'user_id': fake.random_int(min=20000, max=100000),
            'artist': 'tony-allen',
            'song_id': random_song_id,
            'song_name': songs[random_song_id],
            'timestamp': str(fake.date_time_this_month()),
        }
        m = json.dumps(data)
        p.produce('tony-allen-plays', m.encode('utf-8'), callback=receipt)
        p.poll(1)  # Polls/checks the producer for events and calls the corresponding callback functions.
        time.sleep(3)
    p.flush()  # Wait for all messages in the Producer queue to be delivered. Should be called prior to shutting down the producer to ensure all outstanding/queued/in-flight messages are delivered.

if __name__ == '__main__':
    main()

Notice that when we call p.produce we specify the Kafka topic to which we wish to publish the message. In this case, it's called tony-allen-plays. Since this topic doesn't exist in our Kafka cluster yet, it's automatically created the first time this application is run.

p.poll is important, as that checks the producer for events and calls the corresponding callback function we defined earlier.

Our full script should look like this:

To confirm that the producer is working as expected, run the producer script in a terminal window (e.g. python producer.py, if that is what you named the file):


You should see the following output, which prints out the events being sent to the Kafka topic every 3 seconds.

Producer output in a terminal window — image by author

The consumer will do two main things:

  • Poll and retrieve events from the tony-allen-plays topic
  • Send those events as a stream to BigQuery using the BigQuery Streaming API

Install the BigQuery Python library

To begin, install the BigQuery Python library with the following command.

pip install google-cloud-bigquery

Then we can import it into the script and set up the BigQuery configurations.

from confluent_kafka import Consumer
from google.cloud import bigquery
import ast
from google.oauth2 import service_account

#Create BQ credentials object
credentials = service_account.Credentials.from_service_account_file('PATH-TO-BQ-SERVICE-ACCOUNT')

# Construct a BigQuery client object.
bq_client = bigquery.Client(credentials=credentials)

#Specify BigQuery table to stream to
table_id = 'PROJECT-ID.DATASET.TABLE-NAME'

Initiate the Consumer

Next, we initiate the Kafka consumer by specifying the port, and then we subscribe to the topic tony-allen-plays. When initiating the consumer, we specify the consumer group.id, because all Kafka consumers must belong to a consumer group.

#Create Kafka Consumer
c = Consumer({
    'bootstrap.servers': 'localhost:9092',
    'group.id': 'bigquery-consumer-group',  # any group name works; all consumers must belong to a group
    'auto.offset.reset': 'earliest',
})
print('Kafka Consumer has been initiated...')

#Subscribe to topic
c.subscribe(['tony-allen-plays'])
You will also notice the attribute — auto.offset.reset — which is set to 'earliest'. It is basically telling the consumer to consume from the beginning of the topic partition.

Kafka auto.offset.reset: earliest — Image by author

A typical Kafka consumer application is centered around a consume loop. So the last step is to write a loop that continuously polls the topic for new messages and, if it finds any, sends those messages to BigQuery.
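Such a loop can be sketched as follows, factored into functions so that the consumer and BigQuery client created above are passed in. The helper names process_message and run_loop, and the exact error-handling details, are my own choices for illustration, not necessarily those of the original script:

```python
import ast


def process_message(raw: bytes) -> dict:
    """Decode a Kafka message payload (a utf-8, dict-like string) into a dict."""
    return ast.literal_eval(raw.decode('utf-8'))


def run_loop(consumer, bq_client, table_id):
    """Continuously poll the topic and stream each decoded event to BigQuery."""
    try:
        while True:
            msg = consumer.poll(1.0)  # wait up to 1 second for a new message
            if msg is None:
                continue
            if msg.error():
                print('Consumer error: {}'.format(msg.error()))
                continue
            event = process_message(msg.value())
            print(event)
            # Stream the single-row payload via the BigQuery Streaming API
            errors = bq_client.insert_rows_json(table_id, [event])
            if errors:
                print('Encountered errors while inserting rows: {}'.format(errors))
    except KeyboardInterrupt:
        pass
    finally:
        consumer.close()  # leave the consumer group cleanly
```

Calling run_loop(c, bq_client, table_id) with the objects from the previous snippets starts the stream; Ctrl-C stops it and closes the consumer.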

Our full script should then look like this:


Run the Kafka pipeline

Now that the consumer and producer have been set up, open up two separate terminal windows and run the producer again:


Then run the consumer afterwards so that it reads data from the topic in real time:


If you see the messages generated by the producer start showing up in the consumer terminal window, then your consumer is working as it should, and the data should also be getting streamed into BigQuery:

Kafka output — Image by author
Kafka events in BigQuery — Image by author

The last step will be to connect the BigQuery table to Looker Studio and create a simple bar chart to visualise the popular songs in near real-time.

Go to Looker Studio, sign in and:

  • Select a new “Blank Report”
  • Under “connect to data”, select “BigQuery” as a data source
  • Then select your BigQuery project, dataset, and table

You should now be presented with a similar view. Make sure the dimension and metric fields match the screenshot below and you should have a simple bar chart as shown.

Looker Studio screenshot by author. “Total Plays” was renamed from “Record Count”. — image by author

Near real-time dashboard

Looker Studio has a data freshness feature, which specifies how frequently the dashboard should be refreshed. You can set this to 1 minute, which is the most frequent refresh cycle currently available, and your dashboard should refresh every 1 minute.

We covered the basics of how to set up a minimal Kafka cluster with Docker, load data into a topic, and then consume and stream that data to BigQuery. Finally, we created a near real-time dashboard to present the final results in Looker Studio.

I hope you found this helpful and I wish you the best of luck building your next real-time app! Feel free to share any feedback or other use cases you may have.


