Avacon Data Science project: Chat with your data application
Code for a self-implemented web app, built as part of my application at Avacon Netz for the Data Scientist role. It is a "chat with your data" app using generated data of customers and their gas and electricity meter readings.
Introduction
I hope that this small project underlines my enthusiasm and curiosity for the position once more, while also giving some insight into my coding style and my technical skills.
I chose the topic "chat with your data" because I found it a good fit for the area of data and platform management. A simple natural-language interface that also lets colleagues without SQL skills or complex user interfaces retrieve relevant data is certainly very helpful and can improve the efficiency and adoption of data tools in the company.
I implemented this app entirely on my own, from generating the dataset out of publicly available sources up to the final deployment on the Microsoft Azure cloud. All details are contained in this repository.
The application can be accessed via the following link: Link
Username and password will be provided with my application documents. The same applies to the environment variables that are needed to run the code locally.
Running the Application
The app is deployed on an Azure App Service instance (see Link above), but can also be run locally. To do this, several environment variables need to be set for the APIs and authentication to work properly. These will be sent with my application documents.
To run this app, install poetry (see the official documentation for details). Then, simply run the following commands in a shell of your choice:
poetry install
poetry shell
Now you should be in a shell with all required packages installed. The code connects to Azure SQL using pyodbc, so the Microsoft ODBC driver (version 18) must be installed; to do this, again follow the official documentation. Once this is done, make sure the environment variables mentioned above have been exported and then run:
cd app
gunicorn app:server -b 0.0.0.0:8000
The server should then start and can be accessed in a browser at 0.0.0.0:8000.
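For reference, the database connection that the app builds from these environment variables could look roughly like the following sketch. The variable names used here are placeholders; the actual names are provided with the application documents.

import os
import pyodbc

# Placeholder variable names; the real ones are provided with the application documents.
connection_string = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    f"SERVER={os.environ['SQL_SERVER']};"
    f"DATABASE={os.environ['SQL_DATABASE']};"
    f"UID={os.environ['SQL_USER']};"
    f"PWD={os.environ['SQL_PASSWORD']};"
    "Encrypt=yes;"
)
connection = pyodbc.connect(connection_string)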
General Structure of the App
The main structure of the Dash app starts with a text input field where the question prompt can be entered. Once the submit button is clicked, the user message, together with a long and somewhat optimized system prompt, is sent via the OpenAI API to a generic GPT-4o model.
The model is prompted to return its answer as a JSON-encoded string containing a natural-language summary and a SQL query. The query is run on the Azure SQL Database using pyodbc. Both the summary and the query itself are shown to the user in an output text field. The query result is read into a pandas dataframe, which is then displayed as an interactive table.
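Put together, the round trip from question to table could look roughly like the sketch below. The system prompt, JSON keys and function boundaries are simplified assumptions, not a verbatim excerpt from the app code.

import json
import pandas as pd
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
SYSTEM_PROMPT = "..."  # placeholder for the long, somewhat optimized system prompt

def answer_question(question, connection):
    # Ask GPT-4o for a JSON object containing a natural-language summary and a SQL query.
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    answer = json.loads(response.choices[0].message.content)
    summary, query = answer["summary"], answer["sql"]  # assumed JSON keys

    # Run the generated query on the Azure SQL Database and return the result as a dataframe.
    result = pd.read_sql(query, connection)
    return summary, query, result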
Below this main section, there is a "control field", which can be used to manually input SQL queries for comparison. It is also possible to copy/paste the SQL output of the model into this field to check its result.
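Schematically, the layout could be wired up as in the sketch below; component IDs and labels are illustrative and do not necessarily match the actual code in the app directory.

from dash import Dash, dcc, html

app = Dash(__name__)
server = app.server  # exposed so that gunicorn can serve "app:server"

app.layout = html.Div([
    # Main section: natural-language question, model answer and result table
    dcc.Textarea(id="question-input", placeholder="Ask a question about the customer data ..."),
    html.Button("Submit", id="submit-button"),
    html.Div(id="model-answer"),    # summary and generated SQL query
    html.Div(id="result-table"),    # query result shown as an interactive table

    # Control field: manually entered SQL queries for comparison
    dcc.Textarea(id="manual-sql-input"),
    html.Button("Run SQL", id="run-sql-button"),
    html.Div(id="manual-result-table"),
])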
Which questions can be asked naturally depends on the data, which is described in detail in the following sections. In addition, some example prompts are provided directly in the web application.
Data sources
In this application, the data was randomly generated and has already been uploaded to an Azure SQL Database. To be transparent about how this was done, the scripts and files are included in this repository.
The scripts for general preprocessing as well as database interaction are both located in the data_preparation directory. The raw data and also the preprocessed data file that has ultimately been uploaded to the database are found in the data directory.
All sources for this data are publicly available. Here is a list of the resources used for the different pieces of information:
- German surnames: Most frequent German Surnames from Wiktionary
- German given names: Most frequent male and female given names in Germany from Wiktionary
- Street names: These are street names from the hanseatic city of Rostock, made available as open data here
- Zip codes: from opendatasoft
- Additional information for each zip, such as city name, longitude, latitude etc. using this public API
- Rough bounding box information for Avacon Netz service area: netzgebiete.avacon.de
Data structure
The above data is used to randomly generate a user-specified number of customers; currently, 1000 customers have been generated (a rough sketch of the generation logic follows the list below). Customer information includes:
- Given name and surname
- Street name, house number, zip code and city
- Two meter IDs per customer: one for a natural gas meter, one for an electricity meter
- Each customer has between 1 and 10 (also chosen randomly) meter readings, which include:
- The date at which the reading was obtained
- The value that was read from the meter
- For simplicity, I assumed that electricity and gas meter readings always occur in pairs (i.e. there is no customer who reads only electricity meter values or only natural gas meter values)
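A rough sketch of how one such customer record could be drawn is shown below. The actual logic lives in the data_preparation scripts; field and helper names here are purely illustrative.

import random
from datetime import date, timedelta

def generate_customer(given_names, surnames, streets, zip_codes):
    """Draw one random customer with a gas meter, an electricity meter and 1-10 paired readings."""
    customer = {
        "given_name": random.choice(given_names),
        "surname": random.choice(surnames),
        "street": random.choice(streets),
        "house_number": random.randint(1, 150),
        "zip_code": random.choice(zip_codes),
        "gas_meter_id": random.randint(10_000, 99_999),
        "electricity_meter_id": random.randint(10_000, 99_999),
    }
    # Readings always occur in gas/electricity pairs; dates and values are random.
    customer["readings"] = [
        {
            "date": date(2023, 1, 1) + timedelta(days=random.randint(0, 365)),
            "gas_value": round(random.uniform(0, 20_000), 1),
            "electricity_value": round(random.uniform(0, 10_000), 1),
        }
        for _ in range(random.randint(1, 10))
    ]
    return customer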
The customers, meters and address data are generated and uploaded to the SQL database. The ERD of the database looks like this:
Customers have a first name and a last name and reference other tables only through their gas and electricity meter IDs. I preferred this over addresses because there can be multiple households (and meters) at one address, so meter IDs seemed the more natural choice.
Meters have a signature, which also works like an ID. It is a string in the format W.XXX.YYY.Z, where W, X, Y and Z are digits from 1 to 9. The MeterType has the value GAS for gas meters and ELT for electricity meters. Each meter is located at a certain address and is therefore linked to the Addresses table by an AddressID.
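For illustration, such a signature could be generated as in the small sketch below (the actual implementation is part of the data_preparation scripts):

import random

def random_meter_signature():
    """Build a signature in the format W.XXX.YYY.Z, where each letter stands for a digit from 1 to 9."""
    digit = lambda: str(random.randint(1, 9))
    return f"{digit()}.{digit()}{digit()}{digit()}.{digit()}{digit()}{digit()}.{digit()}"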
The Addresses table contains street name, house number, city, zip and geo information.
Finally, the Readings table stores the data of the meter values read by the customers. Each reading is done by a unique customer from a unique meter and contains the date and the value that was read off the meter.
Cloud Infrastructure
The infrastructure is best described by the image below:
The App Service needs several secrets, which it retrieves from an Azure Key Vault, authenticating via role-based access control (RBAC) with its system-assigned managed identity. It authenticates with the Azure SQL Server and database in the same way. Using the secrets provided by the Key Vault, the App Service can authenticate users, query the Azure OpenAI resource using the API key, and connect to the SQL database to run queries.
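A minimal sketch of how one of these secrets could be read from the Key Vault, assuming the azure-identity and azure-keyvault-secrets packages; the vault URL and secret name are placeholders:

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# DefaultAzureCredential uses the system-assigned managed identity on the App Service
# and falls back to e.g. an Azure CLI login when running locally.
credential = DefaultAzureCredential()
secret_client = SecretClient(vault_url="https://<key-vault-name>.vault.azure.net", credential=credential)

openai_api_key = secret_client.get_secret("openai-api-key").value  # placeholder secret name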
Note that due to overly strict rate limits, a regular OpenAI connection is used rather than an Azure OpenAI instance, but the principle is the same.
