Questions and Answers: Character Creation
- ◯
- How do I create a 2D character image?
- ・
- For example, you can create one with the following steps: generate a character image; edit the image around the mouth (inpainting, etc.); mask out the background (rembg, etc.); convert the mask to transparency (alpha conversion). The mask alpha-conversion node is also provided by this tool.
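The final alpha-conversion step above can be sketched in plain Python. This is a minimal illustration, not the tool's actual node: pixels and the mask are bare tuples and lists, and `apply_mask_alpha` is a hypothetical name.

```python
# Minimal sketch of "mask alpha conversion": given RGB pixels and a
# grayscale mask (0 = background, 255 = character), produce RGBA pixels
# whose alpha channel comes from the mask. A real node would operate on
# image buffers rather than plain tuples.

def apply_mask_alpha(rgb_pixels, mask_values):
    """Attach each mask value as the alpha of the matching RGB pixel."""
    return [(r, g, b, a) for (r, g, b), a in zip(rgb_pixels, mask_values)]

pixels = [(200, 150, 120), (10, 10, 10)]
mask = [255, 0]   # keep the first pixel, make the second transparent
print(apply_mask_alpha(pixels, mask))
# → [(200, 150, 120, 255), (10, 10, 10, 0)]
```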
- ◯
- How do I create 2D character images with a moving mouth (lip sync)?
- ・
- For example, you can create them with the following steps: create a set of difference images of the mouth area only (cut out with an image-editing tool, etc.); then, in this tool's configuration file, specify the position of the mouth-area image (dynamic image) relative to the character image (static image).
- ◯
- How can I change the facial expressions (joy, anger, sadness, happiness) of a 2D character image based on the content of the response?
- ・
- The text output by the language-generation node is analyzed by the sentiment-analysis node; the resulting label is then matched to the image file names of the 2D character expressions (a text-conversion node is also provided for this purpose).
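The label-to-file-name step can be sketched as a simple lookup. The labels and file names below are illustrative only, not the tool's actual vocabulary, and `expression_file` is a hypothetical name:

```python
# Hypothetical sketch of the sentiment-to-expression mapping: the label
# emitted by a sentiment-analysis node is converted into the file name
# of the matching 2D expression image.

EXPRESSIONS = {
    "joy": "face_joy.png",
    "anger": "face_anger.png",
    "sadness": "face_sadness.png",
    "happiness": "face_happiness.png",
}

def expression_file(label: str) -> str:
    # Fall back to a neutral image for any unknown label.
    return EXPRESSIONS.get(label.lower(), "face_neutral.png")

print(expression_file("joy"))       # → face_joy.png
print(expression_file("confused"))  # → face_neutral.png
```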
- ◯
- How do I make characters talk to each other?
- ・
- On each subsequent turn, the roles are reversed and the speaker is switched; a memory node is used to retain the content of the previous turn's response.
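Conceptually, the turn-taking can be sketched as below. Everything here is hypothetical: `reply` stands in for the language-generation node, and the `memory` list plays the role of the memory node.

```python
# Sketch of two characters alternating turns, each hearing the
# previous turn's response from a memory buffer.

def reply(speaker: str, heard: str) -> str:
    # Stub generator: a real setup would call a language-model node here.
    return f"{speaker} answers to: {heard}"

def converse(turns: int, opener: str) -> list[str]:
    speakers = ["char_a", "char_b"]
    memory = [opener]              # memory node: last utterance so far
    log = []
    for i in range(turns):
        speaker = speakers[i % 2]  # roles swap every turn
        utterance = reply(speaker, memory[-1])
        memory.append(utterance)
        log.append(utterance)
    return log
```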
- ◯
- How do I set up complex triggers/actions for my characters?
- ・
- Currently, the web-based client app supports receiving triggers. A plugin that can send and receive triggers is also available for the following client app: Virt-A-Mate
- ◯
- How do I increase the number of triggers/actions for a character (default 4)?
- ・
- Apply the patch file (*_custom.yml) to the container startup file; this raises the number of receive endpoints of the intermediate connector beyond the default of 4 (change: command: ... --numset N --numget N)
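The exact layout of *_custom.yml depends on your container startup file; as a hedged sketch, the service name and base command below are placeholders, and only the --numset/--numget additions reflect the answer above:

```yaml
# Hypothetical patch fragment: the service name ("connector") and base
# command are placeholders; only --numset/--numget come from the answer.
services:
  connector:
    command: server --numset 8 --numget 8
```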
- ◯
- How do I create complex (branching, repetitive) conversation turns (workflows)?
- ・
- Use the CLI workflow (run in Python); the GUI workflow is just a wrapper around the CLI workflow.
- ◯
- How can I use the generated text and images in the GUI workflow (ComfyUI) environment?
- ・
- Use a node that extracts the raw data via its data key (all raw data flows through the intermediate connector and is referenced by data key; essentially, no raw data flows to the GUI-workflow (ComfyUI) side).
- ◯
- How can I save the generated images, audio, etc. to a file?
- ・
- Use a file-storage node (when multiple files are saved automatically, the file names are derived from the data key, in ascending order of creation time).
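The naming rule described above might look like the sketch below. The `key_index.ext` format is an assumption for illustration, not the tool's actual scheme, and `assign_names` is a hypothetical name.

```python
# Sketch of a file-naming rule: when several files share one save
# request, names are derived from the data key plus an index assigned
# in ascending order of creation time.

def assign_names(data_key: str, creation_times: list[float], ext: str) -> list[str]:
    # Sort item indices by creation time, then number them in that order.
    order = sorted(range(len(creation_times)), key=lambda i: creation_times[i])
    names = [""] * len(creation_times)
    for rank, idx in enumerate(order):
        names[idx] = f"{data_key}_{rank:03d}.{ext}"
    return names

print(assign_names("abc123", [5.0, 1.0, 3.0], "png"))
# → ['abc123_002.png', 'abc123_000.png', 'abc123_001.png']
```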
- ◯
- How do you make a character speak according to a given scenario?
- ・
- Use the Language Output node for standard output or the Audio Output node (lists of text and audio files must be prepared in advance).
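A minimal sketch of the scripted playback described above, assuming pre-prepared, position-matched lists of lines and audio file names (all names here are illustrative):

```python
# Sketch of playing a scenario from pre-prepared, paired lists.
# File names are illustrative only.

script_lines = ["Hello there.", "Nice weather today."]
audio_files = ["line_000.wav", "line_001.wav"]

def play_scenario(lines, audio):
    """Pair each scripted line with its pre-rendered audio clip, in order."""
    return list(zip(lines, audio))

for text, clip in play_scenario(script_lines, audio_files):
    print(text, clip)  # a real setup would display the text and play the clip
```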
- ◯
- How can you adjust the timing of a character's speech to fit the scenario?
- ・
- Use a speech-recognition node that outputs an empty string and moves on to the next node for any utterance (such as "yeah" or "uh-huh").
Questions and Answers: How the tool works
- ◯
- How can I use this tool from my already installed GUI workflow environment (ComfyUI)?
- ・
- Obtain and install the custom node using git.
- ◯
- How do I change the location of the app authentication (API KEY) file? / How do I specify my own authentication file?
- ・
- Apply the patch file (*_custom.yml) to the container startup file (change: env_file: ...)
- ◯
- How do I change the external port number of my server?
- ・
- Apply the patch file (*_custom.yml) to the container startup file (change: ports: ...)
- ◯
- How do I change the address of an intermediate connector or content server?
- ・
- Change the following items (using the customization method) in these files: cnf/xoxxox_cnfsrv_nodmid.json (xoxxox_appmid_rem), cnf/xoxxox_cnfsrv_nodweb.json (xoxxox_appweb_rem)
- ◯
- How do I change the server startup options?
- ・
- Apply the patch file (*_custom.yml) to the container startup file (change: command: ...)
- ◯
- How do I run a server on a GPU?
- ・
- Apply the patch file (*_custom.yml) to the container startup file (add the GPU-recognition entry in the format required by your container version). Also, specify the GPU device in the server startup options (argument: --device cuda).
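As an illustrative sketch only: with Docker Compose, GPU recognition is commonly declared through a `deploy.resources.reservations.devices` entry. The service name and base command below are placeholders; only --device cuda comes from the answer above.

```yaml
# Hypothetical patch fragment for GPU use (Docker Compose syntax).
# Service name and base command are placeholders.
services:
  connector:
    command: server --device cuda   # select the GPU in the startup options
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```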
- ◯
- How do I start all the servers I need at once?
- ・
- Create a custom folder for launching and list the dependent plugins, one per line, in the file below; then launch the folder by name just like a normal plugin (the servers will be started automatically via the dependent plugins): import/custom_*/manage/cnf/depend.txt
- ◯
- How can multiple users share this tool (intermediate connectors and workflows) on a cloud server, etc.?
- ・
- Tool resources are divided by managing container networks, containers, and ports for each user and distributing them via configuration files and a proxy server (security can be strengthened further by authenticating at the proxy server, creating an account for each user, and making the containers rootless). Even in this case, all users can use the same GUI workflow, because the individual server names and port numbers are written in the configuration file. However, the number of tools that can run depends on the constraints of the server's resources (CPU and GPU).
- ◯
- How can I run GUI workflow (ComfyUI) conversations from a remote server on a PC or mobile device?
- ・
- Switch the script to the one that uses the websocket API (included in the plugin (custom > ...)).
- ◯
- How do I customize my tools?
- ・
- All of the tool's executables and configuration files can be overridden from a custom folder (you can create as many custom folders as you like and choose which to apply based on your situation and needs).
- ◯
- How do I create and publish a plugin for a tool?
- ・
- Publish your custom folder as a Git repository.
- ◯
- How can I use a published, user-created plugin?
- ・
- Add the plugin address to the patch file for the source-list file (use at your own risk).