Hacker News

Hey all - InvokeAI maintainer here. A few folks mentioned us in other comments, so posting a few steps to try out this model locally.

Our Repo: https://github.com/invoke-ai/InvokeAI

You will need one of the following:

    An NVIDIA-based graphics card with 4 GB or more of VRAM.
    An Apple computer with an M1 chip.
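Not part of InvokeAI itself, but here's a minimal stdlib-only sketch of a pre-flight check for the requirements above. The helper name is hypothetical, and detecting NVIDIA hardware by looking for `nvidia-smi` on the PATH is an assumption, not how InvokeAI does it:

```python
# Hedged sketch: rough guess at which accelerator backend is usable.
# Assumptions: nvidia-smi on PATH implies a working NVIDIA/CUDA setup;
# Darwin + arm64 implies Apple Silicon (M1/M2) with the MPS backend.
import platform
import shutil

def detect_backend() -> str:
    """Return a best-effort guess: 'cuda', 'mps', or 'cpu'."""
    if shutil.which("nvidia-smi"):  # NVIDIA driver tools present
        return "cuda"
    if platform.system() == "Darwin" and platform.machine() == "arm64":
        return "mps"                # Apple Silicon
    return "cpu"

print(detect_backend())
```

This only checks prerequisites; the installer below handles the actual PyTorch setup.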
Installation Instructions: https://invoke-ai.github.io/InvokeAI/installation/

Download the model from Huggingface, add it through our Model Mgmt UI, and then start prompting.

Discord: https://discord.gg/invokeai-the-stable-diffusion-toolkit-102...

Also, a quick plug: we're actively looking for people who want to contribute to our project! Hope you enjoy using the tool.



Any chance of supporting Intel Arc GPUs?


Won't say "never!" - it just seems NVIDIA has a stranglehold on the AI space with CUDA.

We're mainly waiting on others in the space (and/or increased investment by Intel/AMD) to offer broader support.

At this rate, I'd give Apple a good shot at having better support than either of them, given the Neural Engine & CoreML work they've been releasing.


Out of curiosity, will M2s work out of the box?


Ought to! There are some enhancements coming down the pipe for Macs w/ CoreML, so while they won't be as fast as a higher-end NVIDIA card, they'll continue to see performance improvements as well.



