Tuesday, May 30, 2023

Stable Diffusion is now running on my old Intel MacBook.

The training wheels are off!

I finally got Stable Diffusion working on my Intel MacBook, and the results are pretty amazing. It really is the cutting edge of AI image generation. I am able to use LoRA models and embeddings now, and they give you a lot more control over what you are creating. I now get 1024x1024 images that look like photos. It takes a little more time per image, but with my old setup I was lucky to get one image out of ten that was usable. Now about half are ready to go as is, and the other half could be ready with an inpainting session and some GIMP editing.

I am amazed at the number of options that I didn't have with previous AI image tools. I never knew there were so many ways to generate the random noise between steps (the samplers). And there are just as many ways to resize, including AI upscalers that actually create details that didn't exist at 512x512 when they hand you a 1024x1024 file. It looks like a photo. Once I figure out how to use ControlNet and some other things to position the characters in the image, I will really be cooking with gas.

And the plugins that let you control how your tokens work together are amazing, a real game changer. I read the GitHub page that explained how it worked and I didn't understand it at all. *L* Not to mention a plugin that lets you queue up jobs, so you can train on some images and then use that model to create images, all overnight. Or queue up a dozen jobs that run overnight so you can try out a bunch of new prompts. Just amazing, and there are so many extensions that if I talked about just a couple a week it would take years to mention them all. Amazing what a community they have built around this and how much people are giving back.

I haven't been this excited about software in a couple of decades. 

Tuesday, May 23, 2023

Coding using an AI assistant.

Playing with coding using my AI assistant to help, and we cranked out some C code that can chew through multiple millions of primes a second.

Being a little rusty, I was asking my chatbot for help with C data types and defines, and letting it give me little code snippets. It is the smallest possible model that still works, so it has limitations, but it did the best it could. It really was motivating to have the give and take, and just having a sounding board helped me work through issues rapidly.
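
Something like this sketch is the basic shape of what we put together. This is illustrative, not my exact code: the limit and names are made up, and my real program finds the nth prime rather than just counting them.

/*
 * Minimal sieve-of-Eratosthenes sketch (illustrative, not my exact code).
 * Counts the primes below LIMIT by crossing off composites.
 */
#include <stdio.h>
#include <stdlib.h>

#define LIMIT 100000000ULL  /* sieve every number below this bound */

int main(void) {
    unsigned char *is_composite = calloc(LIMIT, 1);
    if (!is_composite) { perror("calloc"); return 1; }

    unsigned long long count = 0;
    for (unsigned long long i = 2; i < LIMIT; i++) {
        if (is_composite[i]) continue;
        count++;  /* i survived the crossings-off, so it is prime */
        for (unsigned long long j = i * i; j < LIMIT; j += i)
            is_composite[j] = 1;  /* cross off every multiple of i */
    }
    printf("%llu primes below %llu\n", count, (unsigned long long)LIMIT);
    free(is_composite);
    return 0;
}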

When I went to a number over 1000000001 I had errors, but Google gave me a web result telling me to put an L at the end of the number, which makes the constant a 64-bit long and let me go on to 200 million and then 400 million primes.
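
The fix is just C's integer-literal and integer-promotion rules. A hedged sketch of the kind of change involved (the numbers here are made up):

/*
 * Integer overflow sketch (illustrative). A constant too big for a
 * 32-bit int, or 32-bit intermediate math, silently goes wrong; the
 * L/ULL suffix and 64-bit types make the intent explicit.
 */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t limit = 10000000000ULL;   /* suffix keeps the constant 64-bit */
    uint32_t n = 3000000000U;          /* fits in 32 bits, barely */
    uint64_t square = (uint64_t)n * n; /* cast BEFORE multiplying, or the
                                          product wraps around at 32 bits */
    printf("limit=%llu square=%llu\n",
           (unsigned long long)limit, (unsigned long long)square);
    return 0;
}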

I was able to use Google and a web page of primes to check my answers, but once the primes went past 32-bit values, nobody on the internet has a list of them.

I did it!!! I reached the edge of the Internet!!!!

Whooooooooooo!!!!


% time ./a.out
The 100000001 prime number is: 2038074751
./a.out 33.60s user 0.77s system 98% cpu 35.051 total

% time ./a.out
The 200000001 prime number is: 4222234763
./a.out 75.01s user 2.64s system 92% cpu 1:23.71 total

% time ./a.out
The 400000001 prime number is: 8736028073
./a.out 166.52s user 83.27s system 76% cpu 5:27.65 total


This last value is the one I can't verify. The values are scaling pretty linearly. But I think the reason the last one took so long is that the prime sieve used up all the memory: note the 83 seconds of system time on that run, which looks like swapping.
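
There is one sanity check that doesn't need the internet: the nth prime is roughly n * (ln n + ln ln n - 1). A quick sketch (illustrative; compile with -lm):

/*
 * Sanity-check the 400,000,001st prime against the asymptotic
 * estimate p_n ~ n * (ln n + ln ln n - 1). Illustrative only.
 */
#include <stdio.h>
#include <math.h>

int main(void) {
    double n = 400000001.0;
    double estimate = n * (log(n) + log(log(n)) - 1.0);
    printf("estimated p_n: %.0f\n", estimate);  /* roughly 8.7e9 */
    return 0;
}

That prints about 8.7 billion, within a fraction of a percent of the 8736028073 above, so the number at least passes the smell test.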

I have a trick to not encode the even numbers, since 2 is the only even prime. The table will immediately drop to half the size when I use this trick, and initializing the table will take half the time too.
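
The idea, sketched below (illustrative, not my actual code): index i stands for the odd number 2*i + 3, and 2 is counted as a special case.

/*
 * Odd-only sieve sketch: the table only stores odd numbers, so it is
 * half the size and takes half the time to initialize. Index i
 * represents the number 2*i + 3. Illustrative, not my exact code.
 */
#include <stdio.h>
#include <stdlib.h>

#define LIMIT 100000000ULL  /* find all primes below this bound */

int main(void) {
    unsigned long long n = (LIMIT - 3) / 2 + 1;  /* odd numbers 3,5,7,... */
    unsigned char *is_composite = calloc(n, 1);
    if (!is_composite) { perror("calloc"); return 1; }

    unsigned long long count = 1;  /* the prime 2, handled specially */
    for (unsigned long long i = 0; i < n; i++) {
        if (is_composite[i]) continue;
        unsigned long long p = 2 * i + 3;
        count++;
        /* first multiple worth crossing off is p*p; stepping the index
           by p moves the number by 2*p, skipping the even multiples */
        for (unsigned long long j = (p * p - 3) / 2; j < n; j += p)
            is_composite[j] = 1;
    }
    printf("%llu primes below %llu\n", count, (unsigned long long)LIMIT);
    free(is_composite);
    return 0;
}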

Forgot how much fun coding can be. :D

Monday, May 22, 2023

Why AI is scary.

The rate of change is just phenomenal. Last year I started reading a lot about how the big companies have huge AI models running on what are basically supercomputers on the web. Then a few months later I heard that people were running these models on high-end desktop machines with high-end graphics cards. And just in the last few months I got three AI programs running on my old Intel MacBook Air. This machine has 8GB of RAM, which is about the minimum to run AI; nothing runs on another Intel machine with 4GB of RAM. Having 16GB of RAM would get things running faster, or let me run bigger models.

Whisper AI can translate just about any language from just about any music or video file into English subtitles, including sung lyrics in a music video. A couple of years ago I wouldn't have believed this was possible. This replaces the need for a translator and a person to do subtitles.

DiffusionBee can make images from a text prompt, just about anything you can describe. It helps to have models that were trained on the subject of the art. A lot of these models have been reduced in size to just 2GB. This eliminates the job of an artist.

Finally, llama.cpp is a dependency-free program that lets you run a small chat model on a lot of different hardware. It is not super fast, but it types its responses about as fast as a medium-good human typist. This eliminates the need for a team of writers. It eliminates the need for a programmer. It could teach math and science, so it eliminates the need for a teacher.

None of these things by themselves are all that wild, but I was never able to do any of them on my computer before this. And now I can do all three on an old laptop with a medium amount of RAM. No crazy processor; no high-end, expensive, power-hungry graphics card. I have never seen software go from running on supercomputers to running on old hardware so rapidly.

And starting now, just about every new computer, even phones and tablets, is going to have AI accelerator chips built into the processor. A phone will be a faster AI machine than this old laptop. I had my stepdad run DiffusionBee on his MacBook M1 laptop and it was literally 60 times faster than my laptop. And this is the first generation of these models. Already people have figured out how to reduce these models to very small sizes and still get real work done. Just imagine how powerful they will get once they start training themselves.

The part that scares me is that just my one old MacBook could do the jobs of a dozen people. I don't think companies are going to pass up the opportunity to reduce their workforce while multiplying the ability of the remaining workers to do a dozen times as much work in a day. I think the bigger companies are going to cut their workforce by half, and smaller companies will shrink to one person and an AI assistant.

Last year I started having my order taken by robots in the drive-thru. How many workers does this eliminate from the workforce?  How long before every company has robots taking orders? Will robots do customer service too? Will robot supervisors manage the few workers that are left? 

I don't worry about robots destroying us. I worry about companies creating a huge depression by reducing the need for humans to work, until there aren't enough folks left to buy anything. It took decades to recover from the Great Depression. What could happen if governments don't get in front of this with laws to protect human workers would make that Depression look like happy fun times.

Got a chat AI running on an old Intel MacBook Air with 8GB of RAM.

It is not a speed demon, but it types faster than a person. I'll give details about what I used at the end.


I tried using the small model with the restrictions, but it was nearly unusable. It gave me three paragraphs about why it wouldn't answer a knock-knock joke. Seriously, why would you treat jokes as somehow bad? The uncensored version just works, without huge disclaimers on every answer.

This demonstrates that it knows a lot of math.



This demonstrates that it can tell fairy tales in many different languages.


I was having a good conversation about moon bases and how to protect the Earth from killer asteroids, when it confabulated that we could hide from the asteroids in the dust, ash, and smoke from volcanoes.



I had it write an essay on potatoes, and it gave a really funny line:

"The introduction of the potato led to the end of the Little Ice Age, which had caused widespread famine across Europe."

This is a bad cause-and-effect fallacy and a huge jump in topic, all in the same sentence.


How I got it working

I used llama.cpp from this link:

https://github.com/ggerganov/llama.cpp

I just followed the directions at the bottom of the page. 



I used the q5_0 quantized WizardLM-7B-uncensored model from this web site:


If you have 16GB of RAM you can probably run the 13B parameter file, which is twice as big.

Put this model in the llama/models/ directory.

I put the following command in a script file to make it easy to run:

#!/bin/bash

#
# Temporary script - will be removed in the future
#

cd `dirname $0`
cd ..

./main -t 4 -m models/WizardLM-7B-uncensored.ggml.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins


I put this file in the examples directory and made it executable. 


Then I run the script and it works. 

Planted a maple tree.

:D



New flower in the yard.

:D