I made my web interface before I'd even heard of Ollama, because I wanted a pay-as-you-go (PAYG) interface for GPT-4.
You also don't need to actually install my web UI: it runs straight from its GitHub page, and the endpoint and API key are both configurable by the user during a chat session.
Also, (a) the Ollama command-line interface is good enough for what I actually want, and (b) my actual problem was not realising I'd only installed the Python library and not the underlying model.
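For anyone hitting the same pitfall: installing the Python client doesn't fetch any model weights; those are pulled separately with the Ollama CLI. A minimal sketch (the model name here is just an example):

```shell
# Installing the Python client alone is not enough -- no model is downloaded:
pip install ollama

# The model weights must be fetched separately via the Ollama CLI:
ollama pull llama3        # example model name
ollama run llama3 "Hello" # now this works
```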