cnnmmd_xoxxox_tlkuni
Client: chat client for a game engine (Unity)
Implementation/Rights (Source Code/License): [※E]
- https://github.com/cnnmmd/cnnmmd_xoxxox_tlkuni
- ※E
- This code is published so that its workings can be inspected (it is still being cleaned up for collaborative development).
Dependencies:
Overview
Chat client for game engines:
- Language: C# (Unity)
- Development environment: Unity ( https://unity.com/ )
Installation: Manual
- If individual settings are required, edit the files as described in the steps below before installing.
Copy the following files to any folder in the game engine (Unity) project pane:
# Copy from: > cnnmmd/export/app/xoxxox > appuni
# Copy to: > Assets/*
Add the following audio playback components to any object (such as a character) in your scene in the Hierarchy pane:
> AudioSource
Add the following components to the same object (once added, each component's fields should reflect the parameters from the individual configuration files):
> RcvVce # Receive audio
> SndVce # Send audio
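The documented procedure adds these components through the Hierarchy pane, but the same setup can be sketched in a script. The following is an illustrative sketch, not part of the plugin: only the component names `AudioSource`, `RcvVce`, and `SndVce` come from the documentation above; the class name and the idea of attaching them in `Awake` are assumptions.

```csharp
// Hypothetical setup script: attaches the audio playback component and the
// plugin's receive/send components to the object this script is placed on.
// "RcvVce" and "SndVce" are the component names given above; everything
// else here is illustrative.
using UnityEngine;

public class TlkuniSetup : MonoBehaviour
{
    void Awake()
    {
        // Audio playback component (added only if not already present)
        if (GetComponent<AudioSource>() == null)
        {
            gameObject.AddComponent<AudioSource>();
        }
        // Plugin components: receive audio and send audio
        gameObject.AddComponent<RcvVce>();
        gameObject.AddComponent<SndVce>();
    }
}
```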
Settings
The individual configuration files are as follows (for files that specify addresses, the relevant contents are also shown): [※A][※B]
# Individual settings:
> cnnmmd/import/cnnmmd_xoxxox_tlkuni/export/app/xoxxox/appuni/bin
> Params.cs
- public static string srvadr = "127.0.0.1" // Relay server address
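Based on the line above, the relevant part of `Params.cs` presumably looks like the fragment below. Only the `srvadr` field and its default value are taken from the documentation; the class wrapper is an assumption.

```csharp
// Assumed shape of Params.cs: only srvadr = "127.0.0.1" is documented,
// the surrounding class structure is illustrative.
public static class Params
{
    // Relay server address: change this to the address of the machine
    // running the relay server.
    public static string srvadr = "127.0.0.1";
}
```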
- ※A
- To modify a plugin file, copy it to a custom folder in the same location, modify the copied file, and then update the main (export) version as needed:
# Plugin side:
> cnnmmd/import/${plugin}/export/.../${target}
# Custom-folder side:
> cnnmmd/import_custom/${custom}/export/.../${target}
$ cd cnnmmd/manage/bin
$ ./remove.sh && ./append.sh
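The copy step above can be sketched in shell. This is an illustrative example only: the plugin name, custom-folder name, and the intermediate `export/app` path segment are hypothetical placeholders (the real paths are the ones with `...` above), and the sandbox layout is created here just so the sketch is self-contained.

```shell
# Hypothetical sketch of the custom-folder copy step. The directory layout
# is created here only so the example runs on its own; in a real checkout
# the plugin-side file already exists.
mkdir -p cnnmmd/import/myplugin/export/app
printf '// original\n' > cnnmmd/import/myplugin/export/app/Target.cs

# Copy the plugin-side file to the matching path under import_custom,
# then edit the copy (not the original).
src="cnnmmd/import/myplugin/export/app/Target.cs"
dst="cnnmmd/import_custom/mycustom/export/app/Target.cs"
mkdir -p "$(dirname "$dst")"
cp "$src" "$dst"
```

After editing the copied file, re-run the `remove.sh` and `append.sh` scripts from `cnnmmd/manage/bin` as shown above to update the main (export) version.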
Execution
Start the necessary servers and run the workflow.
Next, run the game engine (Unity) - this is where the conversation begins. [※1][※C]
- ※1
- The audio input device is selected automatically: whichever microphone is active on the OS side is used.
- ※C
- At the start of a conversation, or when the speaker changes, the speech-synthesis engine takes some time (possibly minutes on a CPU) to load its dictionary and per-speaker models. While the speech-synthesis node's display is active, models are being loaded or speech data is being generated.