
Introduction 🚀

This is my project for the Alkemy Data Analytics + Python Challenge.

The program consumes data from 3 different sources to fill a SQL database with cultural information on Argentine libraries, museums and movie theaters.
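At its core, the pipeline turns CSV data into SQL. As a rough, hypothetical sketch of that step (the real column names, table layout, and helper functions live in the project code, not here):

```python
import csv
import io

def csv_to_inserts(csv_text, table):
    """Turn CSV text into SQL INSERT statements (illustrative only)."""
    rows = csv.DictReader(io.StringIO(csv_text))
    stmts = []
    for row in rows:
        cols = ", ".join(row)
        # Escape single quotes so values are safe to embed in SQL literals
        vals = ", ".join("'" + v.replace("'", "''") + "'" for v in row.values())
        stmts.append(f"INSERT INTO {table} ({cols}) VALUES ({vals});")
    return stmts

sample = "nombre,provincia\nBiblioteca Nacional,Buenos Aires\n"
print(csv_to_inserts(sample, "bibliotecas")[0])
# → INSERT INTO bibliotecas (nombre, provincia) VALUES ('Biblioteca Nacional', 'Buenos Aires');
```

The actual program reads the source CSVs from the web and writes the generated SQL to a file, but the transformation is the same idea.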

Setup 🛠

For Windows

If you are using Windows PowerShell, open a PowerShell console as Administrator and run:

Set-ExecutionPolicy Unrestricted

Open a terminal inside the project folder and run the following commands to create a virtual environment, activate it, and install the required dependencies:

python -m venv venv
venv\Scripts\Activate.ps1
pip install -r requirements.txt

For Linux

Open a terminal inside the project folder and run the following commands to create a virtual environment, activate it, and install the required dependencies:

python -m venv venv
source venv/bin/activate
pip install -r requirements.txt

For both Windows and Linux

Now connect to PostgreSQL with the command:

psql postgres postgres

If it asks you to set a password for the first time, use something like 'postgres' or 'admin'. Otherwise, use the password you set up for the postgres user.

Once you are connected, create a user:

CREATE USER alkemy with encrypted password 'alkemy';

And allow it to create databases:

ALTER USER alkemy CREATEDB;

How to run ▶

That's all! You can now run the program by executing the challenge.py file:

python src/challenge.py

The .csv files will be downloaded by default in the folder 'data'.

The .sql file will be generated by default in the folder 'sql'.

The .log file will be generated by default in the folder 'logs'.

You can change these paths and other settings in the .env file.
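As a rough illustration, the .env file might look like this (the variable names below are hypothetical; check the .env file shipped with the project for the actual keys):

```
# Hypothetical example; the actual keys are defined by the project
DB_USER=alkemy
DB_PASSWORD=alkemy
DATA_DIR=data
SQL_DIR=sql
LOGS_DIR=logs
```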