I’ve always loved words and reading. In fact, some of my earliest memories are of wanting to read but not having the words: once, when I was about three, wanting to read a comic in Sinhala, and a couple of years later, discovering English comics (Tintin and Marvel comics, on two separate occasions) and not knowing English.
I have been developing software for over thirty years, and for more than twenty of those years I have been involved in open source and free software. In fact, for most of my software development career, I’ve released software for free because I believe that if it is something I like and find useful, then it might help others too.
At this point, there are a lot of Stable Diffusion codebases and GUIs out there. But, as far as I know, there aren’t many GUIs for macOS. In fact, getting Stable Diffusion working on macOS is itself problematic, for two reasons:
- On Intel Macs there isn’t any GPU support
- On the Apple Silicon side, while there is some MPS (Metal Performance Shaders) support, it is still nascent and a little buggy.
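In PyTorch terms, handling both of these cases usually comes down to falling back to the CPU whenever MPS isn’t usable. Here’s a minimal sketch of that decision; the helper name is mine, and in real code the two flags would come from `torch.backends.mps.is_available()` and `torch.backends.mps.is_built()` rather than being passed in:

```python
# Sketch of device selection for running Stable Diffusion on a Mac.
# The flags would normally come from torch.backends.mps.is_available()
# and torch.backends.mps.is_built(); they are parameters here so the
# logic can be shown without requiring PyTorch to be installed.

def pick_device(mps_available: bool, mps_built: bool) -> str:
    """Return the torch device string to run inference on."""
    if mps_available and mps_built:
        return "mps"  # Apple Silicon with Metal support
    return "cpu"      # Intel Macs, or any machine without working MPS
```

The same pattern extends naturally if you later want to prefer `"cuda"` on machines that have it.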
As I’ve written previously, I’ve been working on a GUI for Stable Diffusion since I prefer a simple UI that lets me do everything in one place. The trouble with that is that I don’t get as much time for other Stable Diffusion development, since I can only do any of this work in my spare time and on weekends.
I previously talked about how I had been working with Stable Diffusion and that I was also creating a GUI for the image generator. Well, I rewrote the GUI again since I wasn’t happy with how the previous version worked 😛
The thing that annoyed me the most was that wxPython sliders had no way to display floating point values.
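A common workaround for this, since `wx.Slider` only deals in integers, is to scale: have the slider hold the value multiplied by a power of ten internally, and divide when reading it back. A minimal sketch of that mapping (the precision of two decimal places and the example values are my assumptions, not anything from wxPython itself):

```python
# wx.Slider only works with integer values, so a float slider can be
# faked by scaling: the slider internally holds value * 10**precision,
# and we convert back to a float when reading it.

def float_to_slider(value: float, precision: int = 2) -> int:
    """Convert a float (e.g. a guidance scale of 7.5) to a slider int."""
    return round(value * 10 ** precision)

def slider_to_float(pos: int, precision: int = 2) -> float:
    """Convert the slider's integer position back to a float."""
    return pos / 10 ** precision

# In wxPython this might be wired up roughly like so (untested sketch):
#   slider = wx.Slider(panel,
#                      minValue=float_to_slider(1.0),
#                      maxValue=float_to_slider(20.0),
#                      value=float_to_slider(7.5))
#   label.SetLabel(f"{slider_to_float(slider.GetValue()):.2f}")
```

The label update would hang off the slider’s scroll event, so the displayed float tracks the integer position as the user drags.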
I’ve loved technology for the longest time. I can’t put an exact date to it, but the earliest recollection I have is that I wanted to work with computers when I was in grade nine … that would have been when I was about fifteen?
I figured out a more or less functional set up for running Stable Diffusion on my Apple M1 MacBook Pro a couple of days ago. Since then, I’ve mostly done Stable Diffusion tasks on my local machine instead of doing things on Google Colab (or Amazon SageMaker Studio Lab).
I’ve been writing a lot about Stable Diffusion (and image generation using AI) lately. That’s because providing a text prompt, getting an AI generated image back after a few seconds, and then improving upon your prompt to get the exact image you want can be so very addictive 🙂
But there’s a flip side to the coin — those who can’t run AI systems like Stable Diffusion because their machines don’t support it.
When I heard about Stable Diffusion, the first thing I wanted to do was install it on my Apple Silicon MacBook and try it out. Unfortunately, at that point, I couldn’t do so because Stable Diffusion required a GPU, and PyTorch, which Stable Diffusion needs in order to run, did not yet have complete Metal GPU support.
It’s been a pretty interesting few weeks for those interested in AI image generation. Things are happening fast and it might even be a bit hard to keep up with all that is going on. First, there was the arrival of Stable Diffusion.