Sunday, August 4, 2024

Update on Dynamic AI Agent Workflow.

I haven't really touched the core of the program lately; I don't see much that could improve it. I am looking at adding units to the agent inputs and outputs, and at making optional inputs work correctly, applying a default value when no input is given. I am also going to sandbox the proc agents with a few techniques: I have a way of signing agents and limiting what unsigned agents can call (a rough sketch of the idea is below). The first try at all of that should be completed by the end of the week, and I already have demos of each piece done.
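For what it's worth, here is a minimal sketch of the signing idea, just to illustrate one way it could work: an HMAC over the agent definition, plus a whitelist of calls that unsigned agents are allowed to make. The names (sign_agent, can_call), the shared secret, and the whitelist rule are all placeholders, not the actual implementation.

import hmac, hashlib

SIGNING_KEY = b"replace-with-a-real-secret"   # placeholder secret, not the real scheme
SAFE_CALLS = {"read_input", "write_output"}   # hypothetical whitelist for unsigned agents

def sign_agent(agent_source: str) -> str:
    """Return a hex HMAC-SHA256 signature over the agent's definition."""
    return hmac.new(SIGNING_KEY, agent_source.encode(), hashlib.sha256).hexdigest()

def is_signed(agent_source: str, signature: str) -> bool:
    return hmac.compare_digest(sign_agent(agent_source), signature)

def can_call(agent_source: str, signature: str | None, target: str) -> bool:
    """Signed agents may call anything; unsigned agents are limited to the whitelist."""
    if signature and is_signed(agent_source, signature):
        return True
    return target in SAFE_CALLS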

Lately I have mostly been working through the issues with my agents so they write the books correctly. The chapter-writing agent was refusing to write some chapters, and the agent handling foreshadowing was truncating them. I am increasing some of the values in Ollama to try to get longer output from the Llama 3.1 8B model, namely the context window (num_ctx) and num_predict.

I found the full set of options for the Ollama request:

{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false,
  "options": {
    "num_keep": 5,
    "seed": 42,
    "num_predict": 100,
    "top_k": 20,
    "top_p": 0.9,
    "min_p": 0.0,
    "tfs_z": 0.5,
    "typical_p": 0.7,
    "repeat_last_n": 33,
    "temperature": 0.8,
    "repeat_penalty": 1.2,
    "presence_penalty": 1.5,
    "frequency_penalty": 1.0,
    "mirostat": 1,
    "mirostat_tau": 0.8,
    "mirostat_eta": 0.6,
    "penalize_newline": true,
    "stop": ["\n", "user:"],
    "numa": false,
    "num_ctx": 1024,
    "num_batch": 2,
    "num_gpu": 1,
    "main_gpu": 0,
    "low_vram": false,
    "f16_kv": true,
    "vocab_only": false,
    "use_mmap": true,
    "use_mlock": false,
    "num_thread": 8
  }
}
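For the truncation problem, here is a minimal sketch of the kind of non-streaming call I'm testing against Ollama's /api/generate endpoint, with num_ctx and num_predict raised well above the values in the example above. The Python requests wrapper, the model tag, and the specific numbers are just placeholders, not final settings.

import requests

# Placeholder values, raised to try to stop the chapters from truncating.
payload = {
    "model": "llama3.1:8b",
    "prompt": "Write the full chapter from the outline below...",
    "stream": False,
    "options": {
        "num_ctx": 8192,      # bigger context window than the 1024 shown above
        "num_predict": 4096,  # allow far more output tokens (-1 means no limit)
    },
}

resp = requests.post("http://localhost:11434/api/generate", json=payload, timeout=600)
resp.raise_for_status()
print(resp.json()["response"])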

 
