Fixed SQL code formatting errors by:
- catching both single and double backslashes in the formatting
- explicitly telling the LLM how to format line breaks
Also made some changes to the UI and allowed general questions
about the database content to be asked.
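A rough sketch of the backslash handling (function name and exact replacements are assumptions, not the actual implementation):

```python
import re

def normalize_linebreaks(sql: str) -> str:
    # Replace double-escaped newlines first, then single-escaped ones,
    # so both variants end up as real line breaks.
    sql = sql.replace("\\\\n", "\n").replace("\\n", "\n")
    # Collapse any excess blank lines left over after the replacement.
    return re.sub(r"\n{3,}", "\n\n", sql).strip()
```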
Using the respective credentials for both local development and
deployment. When deployed on Azure, the app authenticates against the SQL
database via Entra ID (formerly Azure Active Directory) and accesses other
credentials from Key Vault using a system-assigned managed identity.
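A minimal sketch of that authentication flow with `azure-identity`, `azure-keyvault-secrets` and `pyodbc`; server, database, vault and secret names are placeholders:

```python
import struct
import pyodbc
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()  # resolves to the managed identity on Azure

# Request a token for Azure SQL and pack it the way the ODBC driver expects.
token = credential.get_token("https://database.windows.net/.default").token
token_bytes = token.encode("utf-16-le")
token_struct = struct.pack(f"<I{len(token_bytes)}s", len(token_bytes), token_bytes)
SQL_COPT_SS_ACCESS_TOKEN = 1256  # connection attribute defined by the MS ODBC driver

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<server>.database.windows.net;Database=<database>",
    attrs_before={SQL_COPT_SS_ACCESS_TOKEN: token_struct},
)

# Other secrets (e.g. the OpenAI key) are read from Key Vault with the same identity.
secrets = SecretClient(vault_url="https://<vault>.vault.azure.net", credential=credential)
openai_key = secrets.get_secret("openai-api-key").value  # secret name is an assumption
```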
The Dockerfile was updated to align with the software
requirements of the app. An additional script, `install_odbc.sh`, was added
to install the required Microsoft ODBC driver.
Previously, there was no direct description of how to use the manual
SQL query input field. A Plotly Dash component with a more precise
description for the user was added.
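A sketch of what such a component could look like; the wording and component choice are illustrative:

```python
from dash import dcc, html

manual_sql_description = html.Div(
    [
        html.H5("Manual SQL query"),
        dcc.Markdown(
            "Enter a raw SQL query here to run it directly against the database "
            "and compare its result with the query generated by the LLM."
        ),
    ]
)
```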
Add the first working code logic for both backend and
frontend-related tasks. Add a detailed system message for improved
results. Add several UI improvements for result display and user
information. Add a text input field for direct SQL code comparison.
The implementation of the OpenAI backend had to be changed due to the strict
rate limits of the Azure OpenAI free tier; it was replaced with a regular
OpenAI API key.
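The backend call now goes through the standard OpenAI client; a minimal sketch, where the model name is an assumption and the real system message is considerably more detailed:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_MESSAGE = (
    "You are an assistant that translates natural-language questions into T-SQL "
    "for the customer database described below. Return only the SQL query."
)

def ask_llm(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_MESSAGE},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```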
In order to compare the (not yet implemented) SQL query generated by
the LLM with an actual query, another text field was added that passes
the query to `pyodbc`, which connects to our database; the resulting
rows are stored in a `pandas` DataFrame and then visualized as a table
in Plotly Dash.
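Conceptually, the round trip looks roughly like this sketch (function name and table options are illustrative):

```python
import pandas as pd
import pyodbc
from dash import dash_table

def query_to_table(query: str, connection_string: str) -> dash_table.DataTable:
    # Run the user-supplied query and fetch the result into a DataFrame.
    with pyodbc.connect(connection_string) as conn:
        df = pd.read_sql(query, conn)
    # Render the DataFrame as a Dash table component.
    return dash_table.DataTable(
        columns=[{"name": col, "id": col} for col in df.columns],
        data=df.to_dict("records"),
        page_size=10,
    )
```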
The SQL functionalities are implemented in the `sql_utils.py` module.
Additionally, some minor updates to the overall behavior and layout of
the app were implemented.
Includes the first version of a rudimentary chat app, still without the
SQL capabilities that we want later. For now, we can connect to the
Azure OpenAI resource and have the response displayed in a Plotly
Dash web app.
Some styling and UI elements were also added, such as logos. The UI
components are designed so that the user cannot enter the same query twice
and cannot click the submit button while a query is running.
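The duplicate-query guard can be sketched as a callback that compares the new input against the last submitted query kept in a `dcc.Store`; component ids and the `ask_llm` helper are hypothetical, and disabling the submit button while a query runs works along similar lines:

```python
from dash import Input, Output, State, callback, no_update

@callback(
    Output("chat-output", "children"),
    Output("last-query", "data"),
    Input("submit-button", "n_clicks"),
    State("query-input", "value"),
    State("last-query", "data"),
    prevent_initial_call=True,
)
def handle_submit(n_clicks, query, last_query):
    # Ignore empty input and repeated submissions of the identical query.
    if not query or query == last_query:
        return no_update, no_update
    answer = ask_llm(query)  # hypothetical backend helper
    return answer, query
```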
Added detailed information about the data sources used for customer data
generation as well as the structure of the SQL database. Also included
an Entity Relationship Diagram (ERD) to visualize the database structure.
The script `insert_sql.py` uses `pyodbc` to connect to the Azure SQL
database, loads the data from the preprocessed `customers.json` file,
formats it and then inserts it into the previously created table schema.
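In outline, the insertion works like the following sketch; the column and field names are assumptions, not the real schema:

```python
import json
import os
import pyodbc

with open("customers.json", encoding="utf-8") as f:
    customers = json.load(f)

# Flatten the nested JSON into rows matching the (assumed) table schema.
rows = [
    (c["customer_id"], c["first_name"], c["last_name"], c["address"]["zip_code"])
    for c in customers
]

connection_string = os.environ["SQL_CONNECTION_STRING"]  # not stored in the repo
with pyodbc.connect(connection_string) as conn:
    cursor = conn.cursor()
    cursor.fast_executemany = True  # speeds up bulk inserts with the MS ODBC driver
    cursor.executemany(
        "INSERT INTO customers (customer_id, first_name, last_name, zip_code) "
        "VALUES (?, ?, ?, ?)",
        rows,
    )
    conn.commit()
```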
Since we are working with an Azure SQL database, we need to fit the
generated customer data into a suitable schema. The schema will be
described in more detail in an updated README file later.
The added script uses `pyodbc` to connect to the database and create the
tables. This requires a connection string, which will not be checked into
this repo for security reasons and must be obtained separately.
Additionally, this commit adds a script, `test_sql_connection.py`, a
simple utility for testing the `pyodbc` connection.
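The connection test can be as small as the following sketch; the environment variable name is an assumption:

```python
import os
import pyodbc

def test_connection() -> None:
    # The connection string comes from the environment; it is not stored in the repo.
    connection_string = os.environ["SQL_CONNECTION_STRING"]
    with pyodbc.connect(connection_string, timeout=10) as conn:
        cursor = conn.cursor()
        cursor.execute("SELECT 1")
        assert cursor.fetchone()[0] == 1
        print("Connection to the database works.")

if __name__ == "__main__":
    test_connection()
```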
Add `data_preparation/generate_customers.py`, a script that takes the
`base_data.json` file generated by `get_base_data.py` and randomly
samples a given number of customers.
To simplify things, each customer is assigned exactly one gas and one
electricity meter, each of which is read between 1 and 10 times.
The full data, including meters, meter readings and dates as well as
customers and addresses, is stored in a final JSON file named
`customers.json`.
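A condensed sketch of the sampling logic; JSON keys and value ranges are assumptions, and reading dates are omitted for brevity:

```python
import json
import random

with open("base_data.json", encoding="utf-8") as f:
    base = json.load(f)

def generate_customers(n: int) -> list[dict]:
    customers = []
    for customer_id in range(1, n + 1):
        customer = {
            "customer_id": customer_id,
            "name": random.choice(base["names"]),        # key names are assumptions
            "address": random.choice(base["addresses"]),
            "meters": [],
        }
        # Exactly one gas and one electricity meter, each read 1 to 10 times.
        for meter_type in ("gas", "electricity"):
            readings = [random.randint(0, 50_000) for _ in range(random.randint(1, 10))]
            customer["meters"].append({"type": meter_type, "readings": readings})
        customers.append(customer)
    return customers

with open("customers.json", "w", encoding="utf-8") as f:
    json.dump(generate_customers(100), f, indent=2, ensure_ascii=False)
```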
The script `get_base_data.py` takes the raw data files (such as `names.txt`)
and formats them into a common JSON file, which can later be used to
randomly generate customer and meter reading data.
Additionally, the script filters all eligible zip codes within an
approximate Avacon Netz service area and provides some additional
information for them.
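A reduced sketch of that assembly step; file names, JSON keys and the zip-code prefixes are placeholders, not the script's actual filter:

```python
import json

SERVICE_AREA_PREFIXES = ("29", "38")  # placeholder stand-in for the Avacon Netz area

with open("names.txt", encoding="utf-8") as f:
    names = [line.strip() for line in f if line.strip()]

with open("zip_codes.txt", encoding="utf-8") as f:
    zip_codes = [z.strip() for z in f if z.strip().startswith(SERVICE_AREA_PREFIXES)]

base_data = {"names": names, "zip_codes": zip_codes}

with open("base_data.json", "w", encoding="utf-8") as f:
    json.dump(base_data, f, indent=2, ensure_ascii=False)
```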
An example output file, `base_data.json`, was added to the repo in
a previous commit.
Usually, one would not check the actual data files into the repo, but store
them elsewhere (such as in Azure Blob Storage). In this case, it is still
convenient for external reviewers to get an idea of the structure of
the data.
Since the files total less than 2 MB, this is acceptable for this
specific case.
Set up Python project files using Poetry. A basic environment
is installed, including Dash for the app that will be implemented
later.
Also contains several dev tools, including pre-commit hooks.
Co-authored-by: Tobias Quadfasel <tobias.loesche@studium.uni-hamburg.de>
Reviewed-on: #1