Install Ollama from its website: https://ollama.com/
Run an Ollama model: $ ollama run llama3.2
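Once the model is running, Ollama exposes a local HTTP API (default port 11434). A minimal sketch of calling it from Python, assuming the default endpoint and the llama3.2 model pulled above:

```python
import json
import urllib.request

# Default local Ollama endpoint; adjust if you changed the port.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt, model="llama3.2"):
    """Build the JSON body Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt, model="llama3.2"):
    """Send a prompt to the local Ollama server and return its response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

This requires the `ollama run` (or `ollama serve`) process from the step above to be running; otherwise the request fails with a connection error.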
Install Python3
macOS: already installed
Windows: download it from https://www.python.org/downloads/
Install pip3
macOS: already installed
Windows:
> curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
> python get-pip.py
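To confirm that `python` and `pip` resolve to the same interpreter (a common source of confusion on Windows), a quick check:

```python
import subprocess
import sys

# Print the interpreter version and the pip that belongs to it.
# Using "python -m pip" (rather than a bare "pip") guarantees both
# refer to the same installation.
print(sys.version)
result = subprocess.run(
    [sys.executable, "-m", "pip", "--version"],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())
```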
Create and activate a virtual environment (venv):
create your local venv:
Windows:
CMD: > python -m venv venv
GitBash: $ python -m venv venv
macOS: $ python3 -m venv venv
activate your local venv:
macOS: $ source venv/bin/activate
Windows:
GitBash: $ source venv/Scripts/activate
CMD: > venv\Scripts\activate
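To verify the activation worked, you can check from Python itself: inside a venv the interpreter's `sys.prefix` differs from the base installation's prefix.

```python
import sys

# Inside an active venv, sys.prefix points at the venv directory while
# sys.base_prefix still points at the base Python installation.
def in_venv():
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

print("venv active:", in_venv())
```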
install dependencies:
macOS only: $ brew install cmake apache-arrow
$ pip install -r requirements.txt
Run Streamlit: $ streamlit run app.py
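A Streamlit chat app like this one typically keeps its conversation in `st.session_state` as a list of role/content messages. The structure can be sketched in plain Python (hypothetical helper name, shown standalone so it runs without Streamlit installed):

```python
# Hypothetical sketch of the message history a Streamlit chat app keeps;
# in a real app this list would live in st.session_state.
def append_message(history, role, content):
    """Add one chat turn; role is 'user' or 'assistant'."""
    history.append({"role": role, "content": content})
    return history

history = []
append_message(history, "user", "Hello")
append_message(history, "assistant", "Hi! How can I help?")
print(len(history))  # 2
```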
PS.:
before running the Streamlit app from PyCharm, run the command above once directly in the terminal
on that first run, Streamlit prompts for an email to sign up; enter one (or leave it blank) to continue
PyCharm debug:
create a run configuration
if running/debugging raises an error, just delete the configuration and create a new one:
set it to run a module instead of a script
module name = streamlit
script parameters = run app.py
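The run configuration above (module = streamlit, parameters = "run app.py") is equivalent to invoking Streamlit as a module from the command line:

```python
import sys

# Equivalent command line for the PyCharm run configuration above:
# the module form guarantees the streamlit from the active venv is used.
cmd = [sys.executable, "-m", "streamlit", "run", "app.py"]
print(" ".join(cmd))
# To launch programmatically (requires streamlit installed):
# import subprocess; subprocess.run(cmd)
```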
If you hit errors such as "asyncio\base_events.py:182" or "Python\Python312\Lib\asyncio\events.py, line 88", you must disable python.debug.asyncio.repl:
click the search icon (top right, near the close button X)
type "registry" in the search field and click "Registry..."
find python.debug.asyncio.repl and uncheck the checkbox
experiments : backup code kept for reference in future investigations
src : the long-term application source code
app.py : the application entry point
About
StreamLit LLM Chat