Monday, July 31, 2023

Fix the world's electrical systems.

 

There are four main areas that need to be fixed. 

1. The main grid.

   The main grid needs to be beefed up to carry power from areas that have a surplus to areas that are short of power.  We need to switch to superconductors in downtown metro areas for efficiency.

   There is also the fact that our electric grid is very vulnerable to EMPs and solar flares.  We need a method of disconnecting the wires of the main grid from the big transformers to protect them from these effects.

    We also need backups of the big transformers in distributed warehouses, ready to be deployed within hours to replace a burned-out unit.  Right now it would take years to build and deliver these transformers if they were damaged by a catastrophic event. 

2. Upgrade houses to be smart homes.

   Appliances should be smart and connect to the house's control unit.  This control unit should schedule when things run to reduce the demand being placed on the electrical grid, so the fridge doesn't run at the same time as the AC unit.  The control unit should limit the draw of the entire house to no more than 20 amps.  Battery packs should be charged at night, or charged from the home grid in the day.  If electrical power is used to heat water, this should be done at night.  Done correctly, this staging of appliances could reduce the power needs of households in America by a factor of 5.
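A minimal sketch of the kind of rule such a control unit might apply: walk the list of appliances asking for power and only switch one on if the whole house stays under the 20 amp cap. The appliance names, draws, and the greedy first-come ordering are made-up examples, not a real spec.

```c
#define MAX_DRAW_AMPS 20.0

typedef struct {
    const char *name;
    double amps;   /* draw when running */
    int wants_on;  /* appliance is requesting power */
    int running;   /* set by the scheduler */
} Appliance;

/* Grant power in list order while staying under the house cap.
   Returns the total draw actually granted. */
double schedule(Appliance *a, int n) {
    double load = 0.0;
    for (int i = 0; i < n; i++) {
        a[i].running = 0;
        if (a[i].wants_on && load + a[i].amps <= MAX_DRAW_AMPS) {
            a[i].running = 1;
            load += a[i].amps;
        }
    }
    return load;
}
```

A real controller would also prioritize (keep the fridge cold before heating water), but the cap check is the heart of it.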

3. Switch to solar power, wind power, and water power.

    If your house needs more than 20-amp service, or when the main grid is out, you should be able to switch to batteries at your own home that are charged with locally produced solar, wind, and water power.  A 10 kW solar panel on every roof, with a wind turbine for windy conditions, would be enough to charge these batteries, come wind or shine.

    And here is where the beefed-up grid from step one comes into play.  When the main grid senses a lack of power in one area, it should signal other areas, and they put power from their batteries or from their solar/wind/water sources back onto the grid.  Imagine the West Coast helping out the East Coast in the morning, and then the East Coast helping out the West Coast later as the heat peaks in the west.

   The power system could keep track of how much power you add to the grid and how much you take from the grid, and charge you for the power you used minus the power you added, plus a flat rate to maintain the power grid. 
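That billing rule is simple enough to write down. A sketch with made-up rates (the clamp at zero is my own assumption about how credits would work):

```c
/* Net-metering bill: pay for what you used minus what you fed back,
   plus a flat grid-maintenance fee.  Rates are illustrative only. */
double monthly_bill(double kwh_used, double kwh_added,
                    double rate_per_kwh, double flat_fee) {
    double net = kwh_used - kwh_added;
    if (net < 0) net = 0;  /* assume the bill never goes below the flat fee */
    return net * rate_per_kwh + flat_fee;
}
```

So a house that drew 500 kWh and fed back 300 kWh pays for the 200 kWh difference plus the flat fee.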

4. Stop using carbon to heat water.  

    Heating water is about one sixth of the energy use in a typical American home.  Switch to solar water heaters to heat water.  You could build tubing into the back of every solar panel and use that to both cool the panel and heat antifreeze.  This warm antifreeze could then be put through a small bank of high-temp vacuum tubes with reflectors that focus the heat on nickel-coated copper tubes to get the temperature up to nearly boiling.  Then run this heated antifreeze through tubes in a 500 gallon water tank to store the heat.  A small solar panel could run the pumps for this system, the intensity of the sun on the panel being all that is needed to regulate the speed of the liquid through the system.  Heavy bags of paraffin that each have a different melting point could be used to store heat in the phase change between solid and liquid states, like heat-storing batteries. 

    If you need to use electrical power to heat the water, then that should only be allowed to happen at night, when there is a surplus.  You could use your own electrical power anytime during the day, say after you reach full charge on the batteries.  Any wind power needs a…  The house's controller could manage this to keep the water up to temp while using a minimum of grid power.

  This tank could be used for hot water for the sinks and shower.  It could also be used to heat a loop of antifreeze that runs under the floors around the perimeter of the house and holds most of the cold at bay.  A loop of this heated water could be put under the floor in the bathroom for toasty warm floors there all the time. 


Tuesday, July 25, 2023

How to build a neural net from scratch in Python.

 Found a very good video with sample code.  Very easy to understand how this works.  Only uses the NumPy array library.  


https://www.youtube.com/watch?v=w8yWXqWQYmU

https://www.kaggle.com/code/wwsalmon/simple-mnist-nn-from-scratch-numpy-no-tf-keras/notebook


Here is a screen shot of one of the test runs and some code.




Weekly ai update


This week I upgraded to the uncensored 7b version of the newly released Llama 2 chatbot ai model.

https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GGML/tree/main

I run it with llama.cpp:

https://github.com/ggerganov/llama.cpp

I finished putting thumbnails on all my loras, TIs, and checkpoints. I actually found 2 hypernetworks and tried both of them out yesterday and today: gothicRPG_ArtStyle_v1 and IncaseStyle_AnythingV3, which I used with Anything V5.

Tried out sd upscale but it didn't work for me. Downloaded and installed the Ultimate SD upscale to try it out.

I installed a couple of extensions to automatic1111 that control the aspect ratio better. I found an extension that shows your images and lets you look through everything you have generated easily, called "infinite image browsing."

Found the smallest ai program that uses modern techniques that can run fast on very small computers. I could see just about everyone releasing intelligent devices with $5 computers installed that can talk and listen and tell stories. Crazy how fast ai is moving to the slowest and cheapest computers. It is being called baby llama.

https://github.com/karpathy/llama2.c

The code that performs the magic is less than 500 lines. One of the really cool pieces of code that I had not seen before is the following code before a for loop:

#pragma omp parallel for

This is clever and elegant because if you don't give the flag and library at compile time, the compiler just ignores it, allowing your code to stay really simple. But if you compile with the flags -fopenmp -lomp then those for loops are magically multithreaded and run many times faster than they would otherwise. Planning on rebuilding a bunch of my old code to see how much faster I can make it.
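A minimal sketch of how that looks in practice (the function and numbers are my own, not from the llama2.c source):

```c
/* Sum of squares over an array.  Compiled plainly, the pragma is an
   unknown pragma and is simply ignored, leaving an ordinary serial
   loop.  Compiled with OpenMP enabled, the iterations are split
   across threads, and the reduction clause tells OpenMP to combine
   each thread's partial total safely. */
double sum_of_squares(const double *x, int n) {
    double total = 0.0;
    #pragma omp parallel for reduction(+:total)
    for (int i = 0; i < n; i++) {
        total += x[i] * x[i];
    }
    return total;
}
```

With gcc both builds work from the same source: `gcc file.c` gives the serial version, `gcc -fopenmp file.c` gives the threaded one, no code changes needed.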

Tiny c based ai program

 Can run fast on just about any hardware above an 8-bit microcontroller.  Amazing how fast this ai stuff is going to the smallest levels. Easy to understand.  Imagine a little robot teddy bear whose mouth moves as it tells stories and talks to a kid. And never tells the same story twice. Imagine it could teach the alphabet one letter a day by telling stories that feature characters whose names start with that letter.  A is for ant, B is for bumblebee, C is for cat, D is for dog, and so on. 

Astounding.


https://github.com/karpathy/llama2.c

sd upscale in the img2img tab

 Stable diffusion has a really cool tool called sd upscale that upscales an image using an ai, then breaks the image up into tiles and runs img2img on those smaller tiles.  This allows you to upscale well beyond any limits you have in memory. The only tradeoff is that doubling the size quadruples the pixel count, so each successive doubling takes about 4 times longer than the one before.
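The cost growth is easy to see in a toy model (the base time is illustrative, not a benchmark):

```c
/* Toy cost model for tiled upscaling: img2img work is roughly
   proportional to pixel count, and doubling both width and height
   quadruples the pixels.  Returns the processing time after
   `doublings` size doublings, given the time at the original size. */
double upscale_seconds(double base_seconds, int doublings) {
    double t = base_seconds;
    for (int i = 0; i < doublings; i++) t *= 4.0;
    return t;
}
```

So if one pass takes 10 seconds at the original size, the pass after two doublings takes around 160 seconds.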

You can find this option by going to the img2img tab in automatic1111, click on scripts at the bottom of the page and select sd upscale.  

You want to set the cfg scale and sampling steps high, and set the denoise strength low. Change your prompt to just "highly detailed."  Play with this to get the effect you want.  

By setting the denoise strength a little higher and using a prompt like "leaf" for an image of leaves, you can create an image of a leaf made up of tiles of little leaves. 


Monday, July 24, 2023

Found the deep fake extension.

 I found the roop deepfake extension in stable diffusion.  It works so great that it is astounding.  This is me as a dwarf, an elf, a hobbit, a barbarian, and as an old man. 










Monday, July 10, 2023

This is from an ai model trained on thousands of JWST images.



Playing around with Stable Diffusion XL on a web site.

 That is the next-generation Stable Diffusion, and it is going to take serious hardware to run.

The uncrop tool is very cool.






Tuesday, July 4, 2023

What is a Lora

I am writing these articles because I just learned this myself.  

A lora is an ai model that was trained on top of a huge general purpose model.  It is specialized for a limited subject area. The name stands for low-rank adaptation.  This area could be an art style, a character, clothing, or a location/setting.  And you can use a few of these loras at the same time.

The big model was created using tens of thousands of enterprise-class high-RAM GPUs.  This can only be done with great expense and expertise, and it takes millions of hours of compute spread across all these machines.  But that work is now done and doesn't need to be repeated.  These general purpose models are awesome; they know a little bit about almost everything.  But they don't know a lot about any one thing.  

This is where a lora comes in.  As an individual or small company you might need to make images of something the larger model doesn't know much about.  Let's say you are making images for a game you are creating and need an image of a beholder.  Now the big ai model might have some idea of a beholder because it ran across a couple of images in the millions of files it was trained on.  But it doesn't make very good beholders.  So on your home computer you can collect up all the images of beholders you can find, convert them to a standard size, tag them with text labels, and then train the new model using stable diffusion in a few hours.  Then by invoking the lora in your stable diffusion prompt along with a big main model, you can place beholders in all sorts of new situations.  You can use as few as a dozen images, but a few hundred is better.
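The "low-rank" part of the name can be sketched with a toy forward pass. This is not stable diffusion's actual code; it just shows the shape of the trick: instead of retraining a whole d x d weight matrix W, the lora learns two skinny matrices B (d x r) and A (r x d) with a small rank r, and the adapted layer computes W*x + B*(A*x). The add-on only needs 2*d*r numbers instead of d*d, which is why lora files are so small.

```c
/* Toy LoRA forward pass: y = W*x + B*(A*x).
   W is d x d, B is d x r, A is r x d, all stored row-major.
   Assumes rank r <= 16 to keep the sketch allocation-free. */
void lora_forward(const double *W, const double *B, const double *A,
                  const double *x, double *y, int d, int r) {
    double h[16];  /* h = A*x, the low-rank bottleneck */
    for (int i = 0; i < r; i++) {
        h[i] = 0.0;
        for (int j = 0; j < d; j++) h[i] += A[i * d + j] * x[j];
    }
    for (int i = 0; i < d; i++) {
        y[i] = 0.0;
        for (int j = 0; j < d; j++) y[i] += W[i * d + j] * x[j];
        for (int k = 0; k < r; k++) y[i] += B[i * r + k] * h[k];
    }
}
```

Note that B*A is never formed as a full d x d matrix; the input is squeezed through the r-dimensional bottleneck instead, which is where the savings come from.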

Because it doesn't take much effort to create one of these loras, people share them with each other. https://civitai.com is one of the main sites this happens on.  Word of warning: that site has a lot of adult content, but it is heavily censored by default, so nobody is forcing you to look at the pictures. 

To use a lora from this site, download it, and put it in stable-diffusion-main/models/Lora

At this point you can also take a couple of minutes to create an information file next to the lora.  Create a file called lora-name.description.txt next to it. The lora-name isn't literal; it should match the name of the actual lora, minus the .safetensor ending.  In this file, cut and paste the keywords used to activate the lora and maybe an example prompt or two from the web site. People post their own examples and the positive and negative prompts they used to create the image. 

To use the lora now that it is installed in the right place, do the following.  Click the little show/hide extra networks button under the big generate button; this opens a new extra networks section in the webui that shows everything you have installed on your machine.  There is a reload button in that section so you can see the ones you just put in. You just have to click on the lora icon to include it in your prompt.  The text file you created and populated is visible at the bottom of the icon for the file.

There is even a civitai helper that will download thumbnails and the training info used to create a model. There is another plugin that is supposed to manage the keywords for you, but I haven't figured that out yet.

Here is an example. This image was created with a 36MB beholder lora, the stable diffusion prompt is there.

https://civitai.com/images/1256610?modelVersionId=102225&prioritizedUserIds=935296&period=AllTime&sort=Most+Reactions&limit=20

The bit inside the angle brackets is the lora: <lora:beholder_v1-000092:1>. It starts with lora (there are other network types too, so you have to say what kind this is), then the name, then the strength. Sometimes you have to change the :1 to weaker, :0.8, or stronger, :1.2, for the best effect.  The word beholder is the new keyword that the lora has been trained to respond to.

I hope I have been clear.  This is everything I know about loras right now.


Monday, July 3, 2023

Automatic1111 has automation by default.

 At the bottom of the txt2img tab is a section called Script.  This is a dropdown with these options:

  • None.
  • Prompt matrix.
  • Prompts from file or text box.
  • X/Y/Z Plot.
  • One Button Prompt.
  • Controlnet m2m.

None is the default. It turns this section off.

--

Prompt matrix.  You can create a list of options inside a set of angle brackets, separated with pipe symbols, and get an image for every combination of options.  

A male (<dwarf | elf | human>) warrior.

Would generate 3 images.

A (< male | female>)  (<dwarf | elf | human>) warrior

would generate 6 images. 

A (< male | female>)  (<dwarf | elf | human>) (<warrior | cleric | thief | wizard>) 

would generate 24 images.
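The image counts above are just the product of the group sizes, which is worth keeping in mind before you queue up a big matrix:

```c
/* Number of images a prompt matrix generates: the product of the
   number of choices in each bracket group.
   e.g. 2 sexes x 3 races x 4 classes = 24 images. */
int matrix_image_count(const int *choices_per_group, int groups) {
    int total = 1;
    for (int i = 0; i < groups; i++) total *= choices_per_group[i];
    return total;
}
```

Adding one more four-option group to the last example would jump the run from 24 to 96 images.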

--

Prompts from file or text box lets you feed in a list of newline-separated prompts, each of which gets run.

--

X/Y/Z plot lets you pick up to three settings and vary them across a grid of runs, then gives you a nicely formatted and labelled set of images that will let you learn what varying those options does.  And the best result is already generated for you in the output folder.  

You can give a range for a numeric option: 5-7 would give 5, 6, 7 on an axis. You can say 5-6 (+0.25) to go from 5 to 6 in quarter steps, or 2-10 [5] to get 5 evenly spaced values across the range (2, 4, 6, 8, 10).  
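A sketch of what expanding the "(+step)" range form amounts to; this is my own helper for illustration, not automatic1111's code:

```c
/* Expand a "start-end (+step)" range into the values a plot axis
   would use, e.g. 5-6 (+0.25) -> 5, 5.25, 5.5, 5.75, 6.
   Writes up to max_values into out and returns how many were written.
   The small epsilon keeps floating-point drift from dropping the
   endpoint. */
int expand_range(double start, double end, double step,
                 double *out, int max_values) {
    int n = 0;
    for (double v = start; v <= end + 1e-9 && n < max_values; v += step)
        out[n++] = v;
    return n;
}
```

So a 5-value axis crossed with a 5-value axis already means 25 generations, which is why the plot sizes grow quickly.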

The models and diffusers are added to a list with mouse clicks. Just put the ones you want to compare in the plot in your list.

--

One button prompt generates an image at random. Kind of a dealer's choice.

--

Not sure what the last option does yet.