Hackathon call:
Create a SQL Knowledge Graph of a Google BigQuery dataset

Sponsored by the DBpedia Association

Sign up to participate in timbr.ai’s track, transform the GDELT repository of the world’s news, or any other public dataset hosted in Google BigQuery, into a knowledge graph, and prove yourself as a knowledge engineer.

The great news is that all you need to know is SQL. You can learn to use the timbr platform in a very short time. Watch here to see how easy it is to use the timbr SQL Knowledge Graph platform to create SQL ontologies and map them to any BigQuery dataset, allowing you to build the winning use-case.

A cash prize of EUR 1,500 will be awarded for the best GDELT use-case, and a prize of EUR 1,200 will be awarded for the best public dataset use-case.

The Hackathon Instructions

  1. Attending the hackathon is free, but registration is required (see below): we need to know the use-case and datasets you plan to use so we can set up an environment custom made for you. If you are not yet sure what your use-case topic will be, sign up anyway; as soon as you know your topic and have decided on the dataset you’ll use, reach out to us and we’ll set up your environment (deadline: Sep 21).

  2. To choose a use-case topic, first review the information about the GDELT project (this is a tough use-case because of the number of concepts/nodes involved).

  3. If you want a different challenge, choose a public dataset from BigQuery. To do so, use your Gmail account to log into your Google Cloud account. There you’ll see a number of datasets under the bigquery-public-data project.

  4. Choose one that will be interesting to explore and analyze by creating a knowledge graph. This includes deciding what interesting questions you want to answer: questions well suited to an ontology-based knowledge graph that may be very difficult to answer with standard SQL (when you query the SQL knowledge graph instead of querying the dataset directly, SQL queries shrink in size by 90% and require no UNION or JOIN statements).

  5. After you have registered and chosen your use-case topic and dataset, you will get access to learn, quickly and easily, how to use the timbr platform, including everything required to bring your use-case to life: modeling an ontology, mapping data to the ontology, running queries, creating visualizations and exploring data as a graph.

  6. Once you become familiar with the timbr platform and feel ready to start building your use-case, let us know and we will supply you with credentials to the timbr platform and a custom-made environment that’s ready to go, so that you can start performing your magic.

  7. What will earn you the prize?
    BigQuery is designed to deliver insights from big data in standard SQL. But SQL queries that answer questions requiring multiple JOIN and UNION statements (common when exploring relationships) can become very complicated. The timbr SQL Knowledge Graph is designed to handle this challenge easily, letting you explore and query databases as a web of relationships to deliver unique insights. The best use-case of our hackathon will make use of these capabilities to deliver the most interesting, useful and unique insights from the selected BigQuery dataset.
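
    To illustrate the kind of simplification described in step 7, here is a hypothetical sketch. All table, column, concept and relationship names below are invented for illustration, and timbr’s actual query syntax may differ; the point is only the contrast between explicit JOINs and ontology-based traversal:

    ```sql
    -- Conventional BigQuery SQL: answering "which authors wrote articles
    -- mentioning a given organization" needs explicit JOINs across tables.
    SELECT a.author_name, o.org_name
    FROM articles AS ar
    JOIN article_authors AS aa ON aa.article_id = ar.article_id
    JOIN authors AS a ON a.author_id = aa.author_id
    JOIN article_mentions AS m ON m.article_id = ar.article_id
    JOIN organizations AS o ON o.org_id = m.org_id
    WHERE o.org_name = 'Example Org';

    -- The same question against an SQL knowledge graph: relationships are
    -- modeled once in the ontology, so the query traverses them by name
    -- instead of spelling out JOIN conditions.
    SELECT author_name, mentioned_org_name
    FROM article_concept
    WHERE mentioned_org_name = 'Example Org';
    ```

    Once the relationships are captured in the ontology, every subsequent question reuses them, which is where the reduction in query size comes from.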

If at any stage you require assistance with the platform, we will be glad to help. Please note that we will not be available on weekends or on September 28th.

Use Case Examples

  • Click here to see an example of what we at timbr have created using BigQuery datasets.
  • Click here to see what people/companies are doing with GDELT.


Important Dates

  • Registration ends and Hackathon begins: September 21st, 2020
  • Hackathon submission deadline (3 min video and 2-3 paragraph summary with links): October 1st, 23:59 Hawaii time
  • Hackathon final event (recap of the track and prize announcements): October 5th, 2020, 16:00 CEST

Quick Facts


  • Participating in the Hackathon is free. Registration is required, though: