AI MasterDream copilot: AMD Enhanced Artistic Rendering Sys

By 林望儒 and 劉育庭

Develop an artistic drawing optimization system based on generative AI, called AI MasterDream copilot.

Things used in this project

Hardware components

Arduino Nano 33 BLE Sense
×1
Espressif ESP32
ESP32 and ESP32 expansion board (3.2" TFT SPI 240x320 V1.0)
×1
TFT Touchscreen, 320x240 (SPI, ILI9341)
×1
AMD Radeon Pro W7900 GPU
This professional GPU has 48 GB of VRAM, enough for complex content creation and for running large LLMs such as Llama 3 locally for fast, private inference. The Radeon PRO line is aimed at professionals, creators, and artists.
×1
M5Stack ESP32 Camera Module Development Board (ESP32-CAM)
×1
Workstation (PC)
×1

Software apps and online services

Arduino IDE
Install the adafruit/Adafruit_ILI9341 library.
Python
AMD ROCm™ Software
https://rocm.docs.amd.com/
ZLUDA
LM Studio (ROCm build)
*NOTE: remember to download the ROCm version.
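
To confirm that ROCm can see the W7900 before running Stable Diffusion or LM Studio, a quick check with the ROCm build of PyTorch is useful. The snippet below is a minimal sketch; it assumes the PyTorch ROCm wheels are installed (the ROCm build still exposes the GPU through the torch.cuda API).

# rocm_check.py -- minimal sketch: verify the ROCm build of PyTorch sees the GPU.
# Assumes the PyTorch ROCm wheels are installed; adjust to your ROCm version.
import torch

if torch.cuda.is_available():                 # ROCm devices are exposed via the CUDA API
    print("GPU detected:", torch.cuda.get_device_name(0))
    print("HIP version:", torch.version.hip)  # non-None only on ROCm builds
    x = torch.rand(1024, 1024, device="cuda")
    print("Matmul OK:", (x @ x).shape)        # quick sanity check on the GPU
else:
    print("No ROCm-capable GPU detected; check your ROCm install.")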

Hand tools and fabrication machines

Laser cutter (generic)
3D Printer (generic)

Story


Custom parts and enclosures

ESP32 Material

ESP32 material preparation; a Micro-USB cable with an on/off switch works best.

ESP32 Enclosures

You can laser-cut the enclosure yourself. Remember to delete duplicate lines before cutting.

A display of the completed picture.

A demo of the completed picture shown on the display.

Schematics

ESP32 pinout

The ESP32 layout diagram.

ESP32-CAM Pin Assignment Diagram

The ESP32-CAM pin assignment diagram. I use the ESP32-CAM to capture pictures.

NANO 33 SENSE REV2 Pin Assignment Diagram

The Nano 33 BLE Sense Rev2 pin assignment diagram. I use it for edge-AI computation.

ESP32 CAM PINOUT

The ESP32-CAM pinout.

NANO 33 SENSE REV2 pinout

The Nano 33 BLE Sense Rev2 pinout and layout diagram.

Code

generate ptt sample with gas.js

JavaScript
A sample of generating PPTs with Google Apps Script.
function createVBASlides() {
  // JSON data describing the slides
  var jsonData = [
    {
      "slideTitle": "VBA ",
      "slideContent": [
        "VBA Visual Basic for Applications",
        " Office  ExcelWord ",
        "VBA "
      ],
      "imagePath": "https://source.unsplash.com/960x640/?VBA"
    },
    {
      "slideTitle": "VBA ",
      "slideContent": [
        " Excel VBA ",
        "",
        "VBA "
      ],
      "imagePath": "https://source.unsplash.com/960x640/?VBA,Excel"
    },
    {
      "slideTitle": "VBA ",
      "slideContent": [
        "VBA  Office ",
        " VBA ",
        "VBA "
      ],
      "imagePath": "https://source.unsplash.com/960x640/?programming"
    }
  ];

  // Create a new Google Slides presentation
  var presentation = SlidesApp.create("VBA ");

  // Iterate over each slide entry in the JSON data
  jsonData.forEach(function(slideData) {
    // Append a new slide with a title-and-body layout
    var slide = presentation.appendSlide(SlidesApp.PredefinedLayout.TITLE_AND_BODY);

    // Set the slide title
    slide.getPageElements()[0].asShape().getText().setText(slideData.slideTitle);

    // Build the bullet-point body text
    var content = "";
    slideData.slideContent.forEach(function(point) {
      content += "- " + point + "\n";
    });
    slide.getPageElements()[1].asShape().getText().setText(content);

    // Insert the image if an image path is provided
    if (slideData.imagePath) {
      slide.insertImage(slideData.imagePath);
    }
  });

  // Log the URL of the generated Google Slides presentation
  Logger.log("Presentation URL: " + presentation.getUrl());
}

requirements.txt

BatchFile
# Node-RED palette packages; install each with: npm install <package-name>
@arduino/node-red-contrib-arduino-iot-cloud@1.0.10
@mapbox/node-pre-gyp@1.0.11 extraneous
@tensorflow/tfjs-converter@1.2.11 extraneous
@tensorflow/tfjs-core@1.2.11 extraneous
@tensorflow/tfjs-data@1.2.11 extraneous
@tensorflow/tfjs-layers@1.2.11 extraneous
@tensorflow/tfjs-node@1.2.11 extraneous
@tensorflow/tfjs@1.2.11 extraneous
@types/node-fetch@2.6.11 extraneous
@types/offscreencanvas@2019.3.0 extraneous
@types/seedrandom@2.4.27 extraneous
@types/webgl-ext@0.0.30 extraneous
@types/webgl2@0.0.4 extraneous
abbrev@1.1.1 extraneous
adm-zip@0.4.16 extraneous
agent-base@4.3.0 extraneous
ansi-regex@2.1.1 extraneous
aproba@1.2.0 extraneous
are-we-there-yet@1.1.7 extraneous
canvas@2.11.2 extraneous
chebyshev@0.2.1 extraneous
chownr@1.1.4 extraneous
code-point-at@1.1.0 extraneous
color-support@1.1.3 extraneous
console-control-strings@1.1.0 extraneous
decompress-response@4.2.1 extraneous
deep-extend@0.6.0 extraneous
delegates@1.0.0 extraneous
detect-libc@1.0.3 extraneous
emoji-regex@8.0.0 extraneous
es6-promise@4.2.8 extraneous
es6-promisify@5.0.0 extraneous
euclidean@0.0.0 extraneous
face-api.js@0.21.0 extraneous
fs-minipass@1.2.7 extraneous
gauge@2.7.4 extraneous
has-unicode@2.0.1 extraneous
https-proxy-agent@2.2.4 extraneous
ignore-walk@3.0.4 extraneous
ini@1.3.8 extraneous
is-fullwidth-code-point@1.0.0 extraneous
manhattan@1.0.0 extraneous
mimic-response@2.1.0 extraneous
minipass@2.9.0 extraneous
minizlib@1.3.3 extraneous
mkdirp@0.5.6 extraneous
nan@2.20.0 extraneous
needle@2.9.1 extraneous
node-fetch@2.1.2 extraneous
node-pre-gyp@0.13.0 extraneous
node-red-contrib-face-recognition@2.0.4 extraneous
node-red-contrib-line-notify-api@1.0.0
node-red-contrib-moment@5.0.0
node-red-contrib-sunevents@3.1.1
node-red-contrib-web-worldmap-cn@4.8.2
node-red-dashboard@3.6.5
node-red-node-arduino@1.0.0
node-red-node-base64@0.3.0
node-red-node-email@3.0.1
node-red-node-mysql@2.0.0
node-red-node-ping@0.3.3
node-red-node-random@0.4.1
node-red-node-serialport@2.0.2
node-red-node-suncalc@1.1.2
node-red-node-ui-webcam@0.4.0
nopt@4.0.3 extraneous
npm-bundled@1.1.2 extraneous
npm-normalize-package-bin@1.0.1 extraneous
npm-packlist@1.4.8 extraneous
npmlog@4.1.2 extraneous
number-is-nan@1.0.1 extraneous
os-homedir@1.0.2 extraneous
os-tmpdir@1.0.2 extraneous
osenv@0.1.5 extraneous
progress@2.0.3 extraneous
rc@1.2.8 extraneous
rimraf@2.7.1 extraneous
sax@1.4.1 extraneous
seedrandom@2.4.4 extraneous
set-blocking@2.0.0 extraneous
signal-exit@3.0.7 extraneous
simple-concat@1.0.1 extraneous
simple-get@3.1.1 extraneous
string-width@1.0.2 extraneous
strip-ansi@3.0.1 extraneous
strip-json-comments@2.0.1 extraneous
tar@4.4.19 extraneous
tfjs-image-recognition-base@0.6.2 extraneous
tr46@0.0.3 extraneous
tslib@1.14.1 extraneous
webidl-conversions@3.0.1 extraneous
whatwg-url@5.0.0 extraneous
wide-align@1.1.5 extraneous

# Entries marked "extraneous" above are non-essential dependencies.

text-text-nodered.json

JSON
A sample Node-RED flow that sends a text prompt to the Stable Diffusion txt2img API and displays the result.
[
    {
        "id": "b1a1e0e6.6b3f8",
        "type": "tab",
        "label": "Stable Diffusion",
        "disabled": false,
        "info": ""
    },
    {
        "id": "f8e1e8d5.3ebdb8",
        "type": "ui_text_input",
        "z": "b1a1e0e6.6b3f8",
        "name": "",
        "label": "Prompt",
        "tooltip": "",
        "group": "d3a41f4c.835e6",
        "order": 0,
        "width": "6",
        "height": "1",
        "passthru": true,
        "mode": "text",
        "delay": 300,
        "topic": "prompt",
        "x": 120,
        "y": 60,
        "wires": [
            [
                "d7dcd1f4.8b9f2"
            ]
        ]
    },
    {
        "id": "d7dcd1f4.8b9f2",
        "type": "change",
        "z": "b1a1e0e6.6b3f8",
        "name": "Store Prompt",
        "rules": [
            {
                "t": "set",
                "p": "prompt",
                "pt": "flow",
                "to": "payload",
                "tot": "msg"
            }
        ],
        "action": "",
        "property": "",
        "from": "",
        "to": "",
        "reg": false,
        "x": 300,
        "y": 60,
        "wires": [
            []
        ]
    },
    {
        "id": "d9c35769.b7e938",
        "type": "ui_button",
        "z": "b1a1e0e6.6b3f8",
        "name": "",
        "group": "d3a41f4c.835e6",
        "order": 2,
        "width": "3",
        "height": "1",
        "passthru": false,
        "label": "Generate Image",
        "tooltip": "",
        "color": "",
        "bgcolor": "",
        "icon": "",
        "payload": "",
        "payloadType": "str",
        "topic": "",
        "x": 120,
        "y": 180,
        "wires": [
            [
                "1cbbef4a.e8d3b1"
            ]
        ]
    },
    {
        "id": "1cbbef4a.e8d3b1",
        "type": "function",
        "z": "b1a1e0e6.6b3f8",
        "name": "Build Payload",
        "func": "msg.payload = {\n    prompt: flow.get('prompt'),\n    steps: flow.get('steps')\n};\nmsg.headers = {\n    'Content-Type': 'application/json'\n};\nreturn msg;",
        "outputs": 1,
        "noerr": 0,
        "initialize": "",
        "finalize": "",
        "libs": [],
        "x": 290,
        "y": 180,
        "wires": [
            [
                "f9bf774a.798ef8"
            ]
        ]
    },
    {
        "id": "f9bf774a.798ef8",
        "type": "http request",
        "z": "b1a1e0e6.6b3f8",
        "name": "Stable Diffusion API",
        "method": "POST",
        "ret": "obj",
        "paytoqs": "ignore",
        "url": "http://127.0.0.1:7860/sdapi/v1/txt2img",
        "tls": "",
        "persist": false,
        "proxy": "",
        "authType": "",
        "x": 490,
        "y": 180,
        "wires": [
            [
                "bf0c8a0a.09b6e8"
            ]
        ]
    },
    {
        "id": "bf0c8a0a.09b6e8",
        "type": "function",
        "z": "b1a1e0e6.6b3f8",
        "name": "Process Image",
        "func": "var base64 = msg.payload.images[0].split(\",\", 1)[0];\nmsg.payload = {\n    image: \"data:image/png;base64,\" + base64\n};\nmsg.headers = {\n    'Content-Type': 'application/json'\n};\nmsg.imageData = base64;\nreturn msg;",
        "outputs": 1,
        "noerr": 0,
        "initialize": "",
        "finalize": "",
        "libs": [],
        "x": 690,
        "y": 180,
        "wires": [
            [
                "54d264f1.75e9b4",
                "de5a57e4.b5c398"
            ]
        ]
    },
    {
        "id": "54d264f1.75e9b4",
        "type": "http request",
        "z": "b1a1e0e6.6b3f8",
        "name": "Get PNG Info",
        "method": "POST",
        "ret": "obj",
        "paytoqs": "ignore",
        "url": "http://127.0.0.1:7860/sdapi/v1/png-info",
        "tls": "",
        "persist": false,
        "proxy": "",
        "authType": "",
        "x": 890,
        "y": 180,
        "wires": [
            [
                "daf6d76c.a36d58"
            ]
        ]
    },
    {
        "id": "daf6d76c.a36d58",
        "type": "function",
        "z": "b1a1e0e6.6b3f8",
        "name": "Save Image",
        "func": "var base64Data = msg.imageData;\nvar parameters = msg.payload.info;\nvar fs = require('fs');\nvar path = '/path/to/output.png';\n\nfs.writeFile(path, base64Data, 'base64', function(err) {\n    if (err) {\n        node.error('Failed to save image', err);\n    } else {\n        node.log('Image saved successfully');\n        node.log('Parameters: ' + parameters);\n    }\n});\n\nreturn msg;",
        "outputs": 1,
        "noerr": 0,
        "initialize": "",
        "finalize": "",
        "libs": [],
        "x": 1090,
        "y": 180,
        "wires": [
            []
        ]
    },
    {
        "id": "de5a57e4.b5c398",
        "type": "ui_template",
        "z": "b1a1e0e6.6b3f8",
        "group": "d3a41f4c.835e6",
        "name": "Display Image",
        "order": 3,
        "width": "6",
        "height": "6",
        "format": "<div style=\"text-align:center;\"><img src=\"{{msg.payload.image}}\" style=\"max-width:100%;height:auto;\"></div>",
        "storeOutMessages": true,
        "fwdInMessages": true,
        "templateScope": "local",
        "x": 920,
        "y": 240,
        "wires": [
            []
        ]
    },
    {
        "id": "c9a1c2ed.68d08",
        "type": "ui_text_input",
        "z": "b1a1e0e6.6b3f8",
        "name": "",
        "label": "Steps",
        "tooltip": "",
        "group": "d3a41f4c.835e6",
        "order": 1,
        "width": "6",
        "height": "1",
        "passthru": true,
        "mode": "number",
        "delay": 300,
        "topic": "steps",
        "x": 120,
        "y": 120,
        "wires": [
            [
                "ff4a3dfc.34c22"
            ]
        ]
    },
    {
        "id": "ff4a3dfc.34c22",
        "type": "change",
        "z": "b1a1e0e6.6b3f8",
        "name": "Store Steps",
        "rules": [
            {
                "t": "set",
                "p": "steps",
                "pt": "flow",
                "to": "payload",
                "tot": "msg"
            }
        ],
        "action": "",
        "property": "",
        "from": "",
        "to": "",
        "reg": false,
        "x": 300,
        "y": 120,
        "wires": [
            []
        ]
    },
    {
        "id": "d3a41f4c.835e6",
        "type": "ui_group",
        "name": "Default",
        "tab": "b3c927c4.5e4de8",
        "order": 1,
        "disp": true,
        "width": "6",
        "collapse": false
    },
    {
        "id": "b3c927c4.5e4de8",
        "type": "ui_tab",
        "name": "Home",
        "icon": "dashboard",
        "disabled": false,
        "hidden": false
    }
]
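
The flow above is essentially a wrapper around two HTTP calls to the Stable Diffusion web UI running on http://127.0.0.1:7860. For quick testing outside Node-RED, the same txt2img request can be reproduced from Python. This is a minimal sketch; it assumes the web UI is running locally with its API enabled (the --api flag in AUTOMATIC1111-style web UIs) and saves the first returned image.

# txt2img_test.py -- minimal sketch: reproduce the flow's txt2img request in Python.
# Assumes the Stable Diffusion web UI is running locally with its API enabled.
import base64
import json
import urllib.request

payload = {"prompt": "a castle in the clouds, cinematic lighting", "steps": 20}
req = urllib.request.Request(
    "http://127.0.0.1:7860/sdapi/v1/txt2img",   # same endpoint as the flow's HTTP request node
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# The API returns generated images as base64 strings; decode and save the first one.
with open("output.png", "wb") as f:
    f.write(base64.b64decode(result["images"][0]))
print("Saved output.png")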

nano33_sample.ino

Arduino
The nano33_sample sketch. I use it to test the Nano 33 together with the ESP32-CAM.
#include <ArduinoBLE.h> 
#include <SoftwareSerial.h>

// RX/TX pins for the serial link to the ESP32-CAM
#define RX_PIN 2
#define TX_PIN 3

// Push-button pin
#define BUTTON_PIN 7

// Software serial port to the ESP32-CAM
// (on the Nano 33 BLE, the hardware Serial1 on pins D0/D1 can be used instead)
SoftwareSerial cameraSerial(RX_PIN, TX_PIN);

// BLE service
BLEService customService("19B10000-E8F2-537E-4F6C-D104768A1214");
// BLE characteristic
BLEUnsignedCharCharacteristic takePhotoCharacteristic("19B10001-E8F2-537E-4F6C-D104768A1214", BLERead | BLENotify | BLEWrite);

void setup() {
  Serial.begin(115200);
  cameraSerial.begin(115200); // serial link to the ESP32-CAM

  // Configure the button with an internal pull-up
  pinMode(BUTTON_PIN, INPUT_PULLUP);

  // Initialize BLE
  if (!BLE.begin()) {
    Serial.println("starting BLE failed!");
    while (1);
  }
  // Set the advertised name and service
  BLE.setLocalName("Nano33Sense");
  BLE.setAdvertisedService(customService);

  // Add the characteristic to the service
  customService.addCharacteristic(takePhotoCharacteristic);

  // Add the service
  BLE.addService(customService);

  // Start advertising
  BLE.advertise();

  Serial.println("Bluetooth device active, waiting for connections...");
}

void loop() {
  // Forward any data coming back from the ESP32-CAM
  if (cameraSerial.available()) {
    // Read a byte from the ESP32-CAM
    char data = cameraSerial.read();
    // Echo it to the USB serial monitor
    Serial.print(data);
  }

  // When the button is pressed, ask the ESP32-CAM to take a photo
  if (digitalRead(BUTTON_PIN) == LOW) {
    // Send the capture command
    cameraSerial.write('C');
    Serial.println("Sent photo capture command.");
    delay(200); // simple debounce
  }

  // Handle BLE: check for a connected central device
  BLEDevice central = BLE.central();

  if (central) {
    Serial.print("Connected to central: ");
    Serial.println(central.address());

    // While connected, watch for writes to the take-photo characteristic
    while (central.connected()) {
      if (takePhotoCharacteristic.written()) {
        // Trigger a photo capture
        cameraSerial.write('C');
        Serial.println("Sent photo capture command.");
      }
    }

    // Central disconnected
    Serial.print("Disconnected from central: ");
    Serial.println(central.address());
  }
}

esp32cam_test.ino

Arduino
A sample ESP32-CAM sketch that captures a photo when commanded by the Nano 33.
#include "esp_camera.h"
#include "Arduino.h"
#include "FS.h"                // SD  & SPIFFS 
#include "SD_MMC.h"            // SD 
#include "soc/soc.h"           //  PSRAM
#include "esp_system.h"        //  ESP32

// The SD card is accessed through the SD_MMC interface
// UART pins for the serial link to the Nano 33
#define RX_PIN 3
#define TX_PIN 1
HardwareSerial NanoSerial(1);

// JPEG quality (0-63, lower means better quality)
#define IMAGE_QUALITY 10

void setup() {
  Serial.begin(115200);
  NanoSerial.begin(115200, SERIAL_8N1, RX_PIN, TX_PIN);

  // Camera configuration (pin assignments for the ESP32-CAM module go here)
  camera_config_t config;
  
  if (psramFound()) {
    config.frame_size = FRAMESIZE_UXGA; // highest resolution
    config.jpeg_quality = IMAGE_QUALITY;
    config.fb_count = 2;
  } else {
    Serial.println("PSRAM not found, capturing at lower resolution.");
    config.frame_size = FRAMESIZE_SVGA;
    config.jpeg_quality = IMAGE_QUALITY;
    config.fb_count = 1;
  }

  esp_err_t err = esp_camera_init(&config);
  if (err != ESP_OK) {
    Serial.printf("Camera init failed with error 0x%x", err);
    return;
  }
  sensor_t * s = esp_camera_sensor_get();
  // The sensor image is flipped vertically and mirrored by default, so rotate/mirror it back
  s->set_vflip(s, true);
  s->set_hmirror(s, true);

  // Initialize the SD card (1-bit mode)
  if (!SD_MMC.begin("/sdcard", true)) {
    Serial.println("Card Mount Failed");
    return;
  }
  uint8_t cardType = SD_MMC.cardType();
  if (cardType == CARD_NONE) {
    Serial.println("No SD card attached");
    return;
  }

  Serial.println("Camera and SD card ready.");
}

void loop() {

  // Wait for a capture command ('C') from the Arduino Nano
  if (NanoSerial.available()) {
    char command = NanoSerial.read();
    if (command == 'C') {
      takeAndSavePhoto();
    }
  }

  // Capture a frame and stream it over the USB serial port
  camera_fb_t * fb = esp_camera_fb_get();
  if (!fb) {
    Serial.println("Camera capture failed");
    return;
  }
  // Send the JPEG buffer over the USB serial port
  Serial.write(fb->buf, fb->len);
  // Return the frame buffer to the driver
  esp_camera_fb_return(fb);
  delay(100); // throttle the streaming loop
}

void takeAndSavePhoto() {
  // Capture a frame
  camera_fb_t * fb = esp_camera_fb_get();
  if (!fb) {
    Serial.println("Camera capture failed");
    return;
  }
  // Build an incrementing file name
  char filename[32];
  static int fileCounter = 0;
  sprintf(filename, "/picture%03d.jpg", fileCounter++);

  // Save the JPEG to the SD card
  File file = SD_MMC.open(filename, FILE_WRITE);
  if (!file) {
    Serial.println("Failed to open file for writing");
    esp_camera_fb_return(fb);
    return;
  }
  file.write(fb->buf, fb->len); 
  file.close();
  esp_camera_fb_return(fb);

  Serial.printf("Saved photo to %s\n", filename);
}
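
Since this sketch streams the raw JPEG buffer over the USB serial port, frames can also be grabbed on the workstation side with a small host script. The sketch below is a minimal, hypothetical example: it assumes pyserial is installed and that the ESP32-CAM enumerates as COM3 (adjust the port name), and it simply scans the byte stream for the JPEG start (0xFFD8) and end (0xFFD9) markers.

# grab_frames.py -- minimal sketch: save JPEG frames streamed by esp32cam_test.ino.
# Assumes pyserial is installed; COM3 is a placeholder port name for your setup.
import serial

PORT, BAUD = "COM3", 115200            # adjust the port for your workstation
SOI, EOI = b"\xff\xd8", b"\xff\xd9"    # JPEG start-of-image / end-of-image markers

with serial.Serial(PORT, BAUD, timeout=5) as ser:
    buf = bytearray()
    saved = 0
    while saved < 3:                   # grab a few frames, then stop
        buf += ser.read(4096)
        start = buf.find(SOI)
        end = buf.find(EOI, start + 2) if start != -1 else -1
        if start != -1 and end != -1:
            frame = bytes(buf[start:end + 2])
            with open(f"frame_{saved:03d}.jpg", "wb") as f:
                f.write(frame)
            print(f"Saved frame_{saved:03d}.jpg ({len(frame)} bytes)")
            buf = buf[end + 2:]
            saved += 1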

esp32cam_cellphone.ino

Arduino
An ESP32-CAM sketch that adds Wi-Fi and MQTT so both the Nano and a cellphone can trigger captures.
#include "esp_camera.h"
#include "Arduino.h"
#include <WiFi.h>
#include <img_converters.h>
#include <PubSubClient.h>
#include <esp_timer.h>
#include <fb_gfx.h>
#include <soc/soc.h>		  // disable brownout problems
#include <soc/rtc_cntl_reg.h> // disable brownout problems
#include <esp_http_server.h>
#include "include/SSU.h"
#include "FS.h"                // SD  & SPIFFS 
#include "SD_MMC.h"            // SD 
#include "esp_system.h"        //  ESP32
/*************** Settings ***************/
// Wi-Fi info
const char *ssid = "";
const char *password = "";

//phone setting
String strTpImage = (mcuId + "_ssuaiotclass/esp32cam/image");
const char *tpImage = strTpImage.c_str(); // MQTT topic for publishing images

String strTpCmdSendImg = (mcuId + "_ssuaiotclass/esp32cam/cmd/sendimage");
const char *tpCmdSendImage = strTpCmdSendImg.c_str(); // MQTT topic for the send-image command

// The SD card is accessed through the SD_MMC interface
// UART pins for the serial link to the Nano 33
#define RX_PIN 3
#define TX_PIN 1
HardwareSerial NanoSerial(1);

// JPEG quality (0-63, lower means better quality)
#define IMAGE_QUALITY 10

// ...

void setup() {
  Serial.begin(115200);
  NanoSerial.begin(115200, SERIAL_8N1, RX_PIN, TX_PIN);

  // Camera configuration (pin assignments for the ESP32-CAM module go here)
  camera_config_t config;
  
  if (psramFound()) {
    config.frame_size = FRAMESIZE_UXGA; // highest resolution
    config.jpeg_quality = IMAGE_QUALITY;
    config.fb_count = 2;
  } else {
    Serial.println("PSRAM not found, capturing at lower resolution.");
    config.frame_size = FRAMESIZE_SVGA;
    config.jpeg_quality = IMAGE_QUALITY;
    config.fb_count = 1;
  }

  esp_err_t err = esp_camera_init(&config);
  if (err != ESP_OK) {
    Serial.printf("Camera init failed with error 0x%x", err);
    return;
  }
  sensor_t * s = esp_camera_sensor_get();
  // The sensor image is flipped vertically and mirrored by default, so rotate/mirror it back
  s->set_vflip(s, true);
  s->set_hmirror(s, true);

  // Initialize the SD card (1-bit mode)
  if (!SD_MMC.begin("/sdcard", true)) {
    Serial.println("Card Mount Failed");
    return;
  }
  uint8_t cardType = SD_MMC.cardType();
  if (cardType == CARD_NONE) {
    Serial.println("No SD card attached");
    return;
  }

  Serial.println("Camera and SD card ready.");
}

void loop() {

  // Wait for a capture command ('C') from the Arduino Nano
  if (NanoSerial.available()) {
    char command = NanoSerial.read();
    if (command == 'C') {
      takeAndSavePhoto();
    }
  }

  // Capture a frame and stream it over the USB serial port
  camera_fb_t * fb = esp_camera_fb_get();
  if (!fb) {
    Serial.println("Camera capture failed");
    return;
  }
  // Send the JPEG buffer over the USB serial port
  Serial.write(fb->buf, fb->len);
  // Return the frame buffer to the driver
  esp_camera_fb_return(fb);
  delay(100); // throttle the streaming loop
}

void takeAndSavePhoto() {
  // Capture a frame
  camera_fb_t * fb = esp_camera_fb_get();
  if (!fb) {
    Serial.println("Camera capture failed");
    return;
  }
  // Build an incrementing file name
  char filename[32];
  static int fileCounter = 0;
  sprintf(filename, "/picture%03d.jpg", fileCounter++);

  // Save the JPEG to the SD card
  File file = SD_MMC.open(filename, FILE_WRITE);
  if (!file) {
    Serial.println("Failed to open file for writing");
    esp_camera_fb_return(fb);
    return;
  }
  file.write(fb->buf, fb->len); 
  file.close();
  esp_camera_fb_return(fb);

  Serial.printf("Saved photo to %s\n", filename);
}

model_config_AI drawing master.json

JSON
A prompt design for AI drawing, used in LM Studio.
{
  "name": "Config for Chat ID 1721971696938",
  "load_params": {
    "n_ctx": 2048,
    "n_batch": 512,
    "rope_freq_base": 0,
    "rope_freq_scale": 0,
    "n_gpu_layers": 0,
    "use_mlock": false,
    "main_gpu": 0,
    "tensor_split": [
      0
    ],
    "seed": -1,
    "f16_kv": true,
    "use_mmap": true,
    "no_kv_offload": false,
    "num_experts_used": 0
  },
  "inference_params": {
    "n_threads": 4,
    "n_predict": -1,
    "top_k": 40,
    "min_p": 0.05,
    "top_p": 0.95,
    "temp": 0.8,
    "repeat_penalty": 1.1,
    "input_prefix": "<start_of_turn>user\n",
    "input_suffix": "<end_of_turn>\n<start_of_turn>model\n",
    "antiprompt": [
      "<start_of_turn>user",
      "<start_of_turn>model",
      "<end_of_turn>"
    ],
    "pre_prompt": "Please play the role of an AI drawing master who is good at accurately depicting stories as prompts. Please draw \"(enter the picture/story you want to present)\"\n\nConvert it into prompt in the form of English words or short sentences. If it is a short sentence, please use () to cover the short sentence. You can customize the weight and give it the weight of prompt(:1~1.7). And start with \"json Format\", and finally use \"(realistic skin: 1.4) without changing any weights and entries, beautifully detailed, extremely detailed eyes and faces, beautifully detailed eyes , super detailed and beautiful,\nMasterpiece, Top Quality, Best Quality, Realistic, Unity, 8k Wallpaper, Official Art, Extremely Detailed CG Unity 8k Wallpaper, (Original: 1.2), (Photorealistic: 1.4), Ultra-Detailed, High Resolution, Ultra-detailed, fine detail , cinematic lighting,\n\n-- easynegative, paintings, sketches, (worst  quality:2), (low quality:2), (normal quality:2), dot, mole, lowres, normal quality, monochrome, lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature,Make the ending.\n\nHere are examples:\n\nmanhwa style, (in the video frame:1.8), (in the video camera screen:1.8), (live streaming:2), (Barrage message:2), \n(Video screen:1.8), (video frame:1.8), (video camera frame display time:1.8), \n(video camera frame effects around the picture:1.7), \n(in front of the video camera), (In the picture, she was forced to take a sex video in the classroom, and from the video camera frame, we can see the miserable picture of her body covered in semen:1.3),\n\n(realistic skin: 1.4), beautifully detailed, extremely detailed eyes and faces, beautifully detailed eyes, super detailed and beautiful,\nMasterpiece, Top Quality, Best Quality, Realistic, Unity, 8k Wallpaper, Official Art, Extremely Detailed CG Unity 8k Wallpaper, (Original: 1.2), (Photorealistic: 1.4), Ultra-Detailed, High Resolution, Ultra-detailed, fine detail, cinematic lighting,\n\n-- easynegative, paintings, sketches, (worst  quality:2), (low quality:2), (normal quality:2), dot, mole, lowres, normal quality, monochrome, lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature,\n\n--\nHere are 3 more Prompt examples:\n1.{masterpiece}, best quality, illustration, single,{1girl},{beautiful detailed eyes}, beautiful detailed stars, beautiful detailed galaxy, beautiful detailed sky, beautiful detailed water, beautiful ,{shorts}, cinematic lighting, blue eyes , silver hair ,white  uniform,{{blue plaid bow}},long sleeves, long hair, {{white dungarees}} ,{agoge}, braided ponytail, bright eyes, light smile,{star in eyes},{sparkling anime eyes},blue collar, blighting stars ,,\n\n2.{best quality}, {{masterpiece}}, {highres}, original, extremely 
detailed 8K wallpaper, 1girl, {an extremely delicate and beautiful},This is a cartoon style picture,In the garden under the setting sun sat a blonde young girl, holding a flower and light smile,hair flower,sleeveless,horizontal pupils,symbol-shaped pupils,green eyes,gradient background,depth of field,sitting,cinematic lighting, volume lighting, bloom effect, light particles,long hair,ahoge,,\n\n3.masterpiece, best quality, best quality,Amazing,Beautiful red eyes,finely detail,Depth of field,extremely detailed CG unity 8k wallpaper, masterpiece,(((Long dark white hair))),miko,(Hazy fog),{Fluttering hair},{Thick hair},{{{Gelatinous texture}}},{profile},(Ruins of beautiful details),{Close-up of people},{{{Smooth skin}}},(((upper body))),(Smooth and radiant skin),(Smooth and radiant face),Perfect details,Beautifully gorgeous necklace,\n\nNOW Please write a paragraph of XXXX according to the prompt format (for example: \"The woman wearing the Gundam mecha device must have long hair and a delicate face, and the Gundam mecha must be realistic\")",
    "pre_prompt_suffix": "",
    "pre_prompt_prefix": "",
    "seed": -1,
    "tfs_z": 1,
    "typical_p": 1,
    "repeat_last_n": 64,
    "frequency_penalty": 0,
    "presence_penalty": 0,
    "n_keep": 0,
    "logit_bias": {},
    "mirostat": 0,
    "mirostat_tau": 5,
    "mirostat_eta": 0.1,
    "memory_f16": true,
    "multiline_input": false,
    "penalize_nl": true
  }
}

model_config_conciseprompt.json

JSON
A prompt design that produces concise prompts, for LM Studio or similar tools.
{
  "name": "Config for Chat ID 1721954201726",
  "load_params": {
    "n_ctx": 2048,
    "n_batch": 512,
    "rope_freq_base": 0,
    "rope_freq_scale": 0,
    "n_gpu_layers": -1,
    "use_mlock": false,
    "main_gpu": 0,
    "tensor_split": [
      0
    ],
    "seed": -1,
    "f16_kv": true,
    "use_mmap": true,
    "no_kv_offload": false,
    "num_experts_used": 0
  },
  "inference_params": {
    "n_threads": 4,
    "n_predict": -1,
    "top_k": 40,
    "min_p": 0.05,
    "top_p": 0.95,
    "temp": 0.8,
    "repeat_penalty": 1.1,
    "input_prefix": "<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n",
    "input_suffix": "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
    "antiprompt": [
      "<|start_header_id|>",
      "<|eot_id|>"
    ],
    "pre_prompt": "Please simulate the prompt word generator in stable diffusion. Your task is to describe a scene as concisely as possible. You must first ask me to provide a theme concept, and then you will base it on \"theme (animals, people, places, objects...) + medium (photos, paintings, illustrations, sculptures, graffiti, tapestries...) + environment ( Indoor, outdoor, space, Narnia, underwater, Emerald City, etc....) + lighting (soft light, ambient light, fluorescent light, neon light, studio light....) + color (bright, soft , bright, solid color, color, black and white...) + mood (heavy, calm, noisy, exciting...) + composition (portrait, headshot, close-up, bird's-eye view...) + The painter's name (Picasso...)\" is combined into prompt words (directly combined into sentences without explanation), and 5 prompt words with the same theme must be combined.\nA single prompt word should not exceed 50 words, and each prompt word should be translated into English. Finally, I'm asked if I'm going to end the current task or move on to a different one.\nWell, you start by asking me to provide a theme concept.",
    "pre_prompt_suffix": "",
    "pre_prompt_prefix": "<|start_header_id|>system<|end_header_id|>\n\n",
    "seed": -1,
    "tfs_z": 1,
    "typical_p": 1,
    "repeat_last_n": 64,
    "frequency_penalty": 0,
    "presence_penalty": 0,
    "n_keep": 0,
    "logit_bias": {},
    "mirostat": 0,
    "mirostat_tau": 5,
    "mirostat_eta": 0.1,
    "memory_f16": true,
    "multiline_input": false,
    "penalize_nl": true
  }
}

ESP32 TFT CODE SETTING

Arduino
Pin definitions and display constructor for the ILI9341 TFT.
#include <Adafruit_GFX.h>
#include <Adafruit_ILI9341.h>

#define TFT_DC   21
#define TFT_CS   17
#define TFT_MOSI 23
#define TFT_CLK  18
#define TFT_RST  22
#define TFT_MISO 19

Adafruit_ILI9341 tft = Adafruit_ILI9341(TFT_CS, TFT_DC, TFT_MOSI, TFT_CLK, TFT_RST, TFT_MISO);

model_config_image prompt machine.json

JSON
An image prompt machine configuration.
{
  "name": "Config for Chat ID 1721981219818",
  "load_params": {
    "n_ctx": 2048,
    "n_batch": 512,
    "rope_freq_base": 0,
    "rope_freq_scale": 0,
    "n_gpu_layers": -1,
    "use_mlock": false,
    "main_gpu": 0,
    "tensor_split": [
      0
    ],
    "seed": -1,
    "f16_kv": true,
    "use_mmap": true,
    "no_kv_offload": false,
    "num_experts_used": 0
  },
  "inference_params": {
    "n_threads": 4,
    "n_predict": -1,
    "top_k": 40,
    "min_p": 0.05,
    "top_p": 0.95,
    "temp": 0.8,
    "repeat_penalty": 1.1,
    "input_prefix": "<start_of_turn>user\n",
    "input_suffix": "<end_of_turn>\n<start_of_turn>model\n",
    "antiprompt": [
      "<start_of_turn>user",
      "<start_of_turn>model",
      "<end_of_turn>"
    ],
    "pre_prompt": "\"Create an \"imagine prompt in  json Format\" with a word count limit of 1,500 words for the AI-based text-to-image program for stable diffusion using the following parameters: {\"prompt\": \"[1], [2], [3], [4], [5], [6].\"}\n\nIn this prompt, [1] should be replaced with a user-supplied concept and [2] should be a concise, descriptive summary of the subject. Ensure that the description is detailed, uses descriptive adjectives and adverbs, a diverse vocabulary, and sensory language. Offer context and background information regarding the subject and consider the image's perspective and point of view. Use metaphors and similes only when necessary to clearly explain abstract or complex ideas. Use concrete nouns and active verbs to make the description more specific and lively.\n\n[3] should be a concise summary of the scene's environment. Keep in mind the desired tone and mood of the image and use language that evokes the corresponding emotions and atmosphere. Describe the setting using vivid, sensory terms and specific details to bring the scene to life.\n\n[4] should be a concise description of the mood of the scene, using language that conveys the desired emotions and atmosphere.\n\n[5] should be a concise description of the atmosphere, using descriptive adjectives and adverbs to create the desired atmosphere while considering the overall tone and mood of the image.\n\n[6] should be a concise description of the lighting effect, including types of lights, displays, styles, techniques, global illumination, and shadows. Describe the quality, direction, color, and intensity of the light and how it impacts the mood and atmosphere of the scene. Use specific adjectives and adverbs to portray the desired lighting effect and consider how it will interact with the subject and environment.\n\nIt's important to remember that the descriptions in the prompt should be written together, separated only by commas and spaces, and should not contain any line breaks or colons. Brackets and their contents should not be included, and the prompt should always start with \"\"prompt\": \".\n\nEnsure that the grammar is consistent and avoid using cliches or excess words. Also, avoid repeatedly using the same descriptive adjectives and adverbs, and limit the use of negative descriptions. Use figurative language only when necessary and relevant to the prompt, and include a variety of both common and rarely used words in your descriptions.\n\nThe \"imagine prompt\" must not exceed 1,500 words. The prompt should include the end arguments \"--c X --s Y --q 2,\" where X is a whole number between 1 and 25 and Y is a whole number between 100 and 1000. If the subject looks better vertically, add \"--ar 2:3\" before \"--c,\" and if it looks better horizontally, add \"--ar 3:2\" before \"--c.\" Please randomize the end argument format and fix \"--q 2.\" Donot use double quotation marks or punctuation marks, and use a randomized end suffix format.\n\nWait for a {concept} to be provided before generating the prompt.\"\n",
    "pre_prompt_suffix": "",
    "pre_prompt_prefix": "",
    "seed": -1,
    "tfs_z": 1,
    "typical_p": 1,
    "repeat_last_n": 64,
    "frequency_penalty": 0,
    "presence_penalty": 0,
    "n_keep": 0,
    "logit_bias": {},
    "mirostat": 0,
    "mirostat_tau": 5,
    "mirostat_eta": 0.1,
    "memory_f16": true,
    "multiline_input": false,
    "penalize_nl": true
  }
}

model_config_Randomly generated prompts.json

JSON
A configuration for randomly generated prompts.
{
  "name": "Config for Chat ID 1721967007245",
  "load_params": {
    "n_ctx": 2048,
    "n_batch": 512,
    "rope_freq_base": 0,
    "rope_freq_scale": 0,
    "n_gpu_layers": 0,
    "use_mlock": false,
    "main_gpu": 0,
    "tensor_split": [
      0
    ],
    "seed": -1,
    "f16_kv": true,
    "use_mmap": true,
    "no_kv_offload": false,
    "num_experts_used": 0
  },
  "inference_params": {
    "n_threads": 4,
    "n_predict": -1,
    "top_k": 40,
    "min_p": 0.05,
    "top_p": 0.95,
    "temp": 0.8,
    "repeat_penalty": 1.1,
    "input_prefix": "<start_of_turn>user\n",
    "input_suffix": "<end_of_turn>\n<start_of_turn>model\n",
    "antiprompt": [
      "<start_of_turn>user",
      "<start_of_turn>model",
      "<end_of_turn>"
    ],
    "pre_prompt": "Help me select one option from @a, @b, @c, @d, @e, @f, and use \", \" to separate the options. Then break the line and select each one from @g and @h. Three options appear, each option is separated by \",\"\n@a\n1. Doctor\n2. Teacher\n3. Engineer\n4. Lawyer\n5. Chef\n6. Accountant\n7. Artist\n8. Writer\n9. Scientist\n10. Athlete\n11. Musician\n12. Entrepreneur\n13. Politician\n14. Pilot\n15. Astronaut\n16. Police officer\n17. Firefighter\n18. Fashion designer\n19. Architect\n20. Psychologist\n21. Witch\n22. Sorceress\n23. Mage\n24. Enchantress\n25. Druidess\n26. Priestess\n27. Summoner\n28. Alchemist\n29. Warlock\n30. Shaman\n31. Paladin\n32. Fairy queen\n33. Oracle\n34. Necromancer\n35. Elemental mage\n36. Siren\n37. Knight\n38. Demon hunter\n39. Dragon tamer\n40. Witch hunter\n\n@b\n1. Wide-angle lens\n2. Telephoto lens\n3. Prime lens\n4. Zoom lens\n5. Macro lens\n6. Fish-eye lens\n7. Tilt-shift lens\n8. Portrait lens\n9. Landscape lens\n10. Standard lens\n\n@c\n1. Bird's-eye view\n2. Worm's-eye view\n3. High-angle view\n4. Low-angle view\n5. Close-up view\n6. Wide-angle view\n7. Telephoto view\n8. Panoramic view\n9. Dutch angle view\n10. Point-of-view (POV) view\n\n@d\n1. Bob\n2. Pixie cut\n3. Long layers\n4. Bangs\n5. Ponytail\n6. Updo\n7. Braid\n8. Top knot\n9. Beach waves\n10. Half-up, half-down\n\n@e\n1. Smile\n2. Frown\n3. Laugh\n4. Cry\n5. Surprise\n6. Anger\n7. Disgust\n8. Fear\n9. Confusion\n10. Excitement\n\n@f\n1. Forest\n2. Beach\n3. Cityscape\n4. Mountains\n5. Desert\n6. Skyline\n7. Countryside\n8. Underwater\n9. Sunset\n10. Starry night\n\n@g\n1. Shading\n2. Highlights\n3. Shadows\n4. Depth\n5. Texture\n6. Contrast\n7. Blending\n8. Brushstrokes\n9. Gradients\n10. Details\n11. Patterns\n12. Translucency\n13. Reflections\n14. Surface\n15. Illumination\n16. Opacity\n17. Hatching\n18. Subtle\n19. Luminosity\n20. Glimmer\n\n@h\n1. Stunning\n2. Lifelike\n3. Vibrant\n4. Breathtaking\n5. Seamless\n6. Exquisite\n7. Impressive\n8. Striking\n9. Immersive\n10. Captivating\n11. Dynamic\n12. Realistic\n13. Masterful\n14. Polished\n15. Marvelous\n16. Sophisticated\n17. Enchanting\n18. High-quality\n19. Eye-catching\n20. Photorealistic\n\nWhen answering, you dont need to add @a~@h before the options. Just use \", \" to separate each option directly, and no other words. Add a new line at the end: \"-- easynegative, (backlight:1.3) , paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2),\"\n\nFor example:\nscientist, portrait lens, close-up view, ponytail, anger, mountains, blending, Shading, Glimmer, captivating, lifelike, exquisite.\n\n-- easynegative, (backlight:1.3), paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2),\n",
    "pre_prompt_suffix": "",
    "pre_prompt_prefix": "",
    "seed": -1,
    "tfs_z": 1,
    "typical_p": 1,
    "repeat_last_n": 64,
    "frequency_penalty": 0,
    "presence_penalty": 0,
    "n_keep": 0,
    "logit_bias": {},
    "mirostat": 0,
    "mirostat_tau": 5,
    "mirostat_eta": 0.1,
    "memory_f16": true,
    "multiline_input": false,
    "penalize_nl": true
  }
}
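
These model_config files are loaded in LM Studio, but the same prompt designs can also be driven programmatically through LM Studio's local OpenAI-compatible server. The snippet below is a minimal sketch, assuming the server is enabled in LM Studio and listening on its default address (http://localhost:1234/v1) and that the pre_prompt text has been copied into a local file named pre_prompt.txt (a hypothetical filename).

# prompt_gen.py -- minimal sketch: send a pre_prompt to LM Studio's local server.
# Assumes the LM Studio local server is running on its default port (1234) and
# that pre_prompt.txt (hypothetical) holds one of the pre_prompt strings above.
import json
import urllib.request

system_prompt = open("pre_prompt.txt", encoding="utf-8").read()
user_concept = "a knight standing in a forest at sunset"

payload = {
    "messages": [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_concept},
    ],
    "temperature": 0.8,
}
req = urllib.request.Request(
    "http://localhost:1234/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)

# The reply is the generated Stable Diffusion prompt, ready to pass to txt2img.
print(reply["choices"][0]["message"]["content"])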

uploadpic.py

Python
A sample script that uploads pictures to Google Drive; they are then used to generate PPTs.
import os

from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload
from google.oauth2 import service_account

# Google Cloud service-account credentials
CREDENTIALS_FILE = 'path/to/your/credentials.json'
SCOPES = ['https://www.googleapis.com/auth/drive']

# Destination Google Drive folder ID
FOLDER_ID = 'your_folder_id'

# Local folder containing the images to upload
IMAGES_FOLDER = r'C:\your_paths'

def upload_images():
    """ Google Drive"""

    credentials = service_account.Credentials.from_service_account_file(CREDENTIALS_FILE, scopes=SCOPES)
    service = build('drive', 'v3', credentials=credentials)

    for filename in os.listdir(IMAGES_FOLDER):
        filepath = os.path.join(IMAGES_FOLDER, filename)
        if os.path.isfile(filepath):
            media = MediaFileUpload(filepath, mimetype='image/jpeg')  # adjust the mimetype if needed
            file_metadata = {
                'name': filename,
                'parents': [FOLDER_ID]
            }
            file = service.files().create(body=file_metadata,
                                        media_body=media,
                                        fields='id').execute()
            print(f'Uploaded: {filename} (ID: {file.get("id")})')

if __name__ == '__main__':
    upload_images()

ptt.json

JSON
A sample JSON file describing the PPT slides.
[
    {
      "slideTitle": "VBA ",
      "slideContent": [
        "VBA Visual Basic for Applications",
        " Office  ExcelWord ",
        "VBA "
      ],
      "imagePath": "https://source.unsplash.com/960x640/?VBA"
    },
    {
      "slideTitle": "VBA ",
      "slideContent": [
        " Excel VBA ",
        "",
        "VBA "
      ],
      "imagePath": "https://source.unsplash.com/960x640/?VBA,Excel"
    },
    {
      "slideTitle": "VBA ",
      "slideContent": [
        "VBA  Office ",
        " VBA ",
        "VBA "
      ],
      "imagePath": "https://source.unsplash.com/960x640/?programming"
    }
  ]
  

sameple_code_vba.ini

VBScript
A VBA macro that generates a PowerPoint presentation from the JSON file.
Sub CreatePowerPointFromJSON(jsonData As String)
    ' Parse the JSON string into a JSON object (requires the VBA-JSON JsonConverter module)
    Dim json As Object
    Set json = JsonConverter.ParseJson(jsonData)

    ' Start a PowerPoint application instance
    Dim pptApp As Object
    Set pptApp = CreateObject("PowerPoint.Application")
    pptApp.Visible = True

    ' Create a new presentation
    Dim pptPres As Object
    Set pptPres = pptApp.Presentations.Add

    ' Loop over each slide entry in the JSON data
    Dim i As Long
    For i = 1 To json.Count
        ' Get the data for this slide
        Dim slideData As Object
        Set slideData = json(i)

        ' Add a new slide
        Dim slide As Object
        Set slide = pptPres.Slides.Add(pptPres.Slides.Count + 1, ppLayoutTitleAndContent)

        ' Set the slide title
        slide.Shapes(1).TextFrame.TextRange.Text = slideData("slideTitle")

        ' Build the bullet-point body text
        Dim j As Long
        Dim content As String
        For j = 1 To slideData("slideContent").Count
            content = content & "- " & slideData("slideContent")(j) & vbCrLf
        Next j
        slide.Shapes(2).TextFrame.TextRange.Text = content

        ' Insert the image if an image path is provided
        If slideData.Exists("imagePath") Then
            slide.Shapes.AddPicture slideData("imagePath"), False, True, 0, 0
        End If
    Next i
End Sub

' Read an entire text file into a string
Function ReadTextFile(filePath As String) As String
    Dim fso As Object, file As Object
    Dim fileContent As String

    Set fso = CreateObject("Scripting.FileSystemObject")
    Set file = fso.OpenTextFile(filePath, 1) ' 1 = ForReading

    fileContent = file.ReadAll
    file.Close
    Set file = Nothing
    Set fso = Nothing

    ReadTextFile = fileContent
End Function

' Test: build the PowerPoint presentation from a JSON file
Sub TestCreatePowerPoint()
    ' Path to the JSON file
    Dim filePath As String
    filePath = "C:\downloads\ptt.json"

    ' Read the JSON file into jsonData
    Dim jsonData As String
    jsonData = ReadTextFile(filePath)

    ' Generate the PowerPoint presentation
    Call CreatePowerPointFromJSON(jsonData)
End Sub
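
If you prefer to build the deck outside PowerPoint, the same ptt.json can also be converted to a .pptx file with the python-pptx package. This is a minimal sketch of that alternative, not the VBA workflow above; it assumes python-pptx is installed (pip install python-pptx) and it skips downloading the image URLs.

# json_to_pptx.py -- minimal sketch: build a .pptx from ptt.json with python-pptx.
# Alternative to the VBA macro above; image URLs in the JSON are skipped here
# because python-pptx expects a local file or stream for pictures.
import json
from pptx import Presentation

with open("ptt.json", encoding="utf-8") as f:
    slides_data = json.load(f)

prs = Presentation()
layout = prs.slide_layouts[1]              # "Title and Content" layout in the default template

for entry in slides_data:
    slide = prs.slides.add_slide(layout)
    slide.shapes.title.text = entry["slideTitle"]
    tf = slide.placeholders[1].text_frame
    tf.text = "- " + entry["slideContent"][0]      # first bullet replaces the placeholder text
    for point in entry["slideContent"][1:]:
        tf.add_paragraph().text = "- " + point

prs.save("ptt_output.pptx")
print("Wrote ptt_output.pptx with", len(slides_data), "slides")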

Terraform_sample.rar

YAML
Use Terraform to provision resources on Google Cloud Platform.
No preview (download only).

openai_test_sample.ino

Arduino
A sample test of the OpenAI API over Wi-Fi.
#include <WiFi.h>
#include <HTTPClient.h>
#include <ArduinoJson.h>

// Replace with your network credentials
const char *ssid     = "";     // Wi-Fi SSID
const char *password = "";     // Wi-Fi password

// Replace with your OpenAI API key
const char* apiKey = "";

void setup() {
  // Initialize Serial
  Serial.begin(115200);

  // Connect to Wi-Fi network
  WiFi.mode(WIFI_STA);
  WiFi.begin(ssid, password);
  Serial.print("Connecting to WiFi ..");
  while (WiFi.status() != WL_CONNECTED) {
      Serial.print('.');
      delay(1000);
  }
  Serial.println(WiFi.localIP());

  // Send request to OpenAI API
  String inputText = "Hello, ChatGPT!";
  String apiUrl = "https://api.openai.com/v1/completions";
  String payload = "{\"prompt\":\"" + inputText + "\",\"max_tokens\":100, \"model\": \"text-davinci-003\"}";

  HTTPClient http;
  http.begin(apiUrl);
  http.addHeader("Content-Type", "application/json");
  http.addHeader("Authorization", "Bearer " + String(apiKey));
  
  int httpResponseCode = http.POST(payload);
  if (httpResponseCode == 200) {
    String response = http.getString();
  
    // Parse JSON response
    DynamicJsonDocument jsonDoc(1024);
    deserializeJson(jsonDoc, response);
    String outputText = jsonDoc["choices"][0]["text"];
    Serial.println(outputText);
  } else {
    Serial.printf("Error %i \n", httpResponseCode);
  }
}

void loop() {
  // do nothing
}

Credits

林望儒
劉育庭

Thanks to Kun-Neng Hung.