Getting Started > Plugins: Basic > Individual: tlkuni
cnnmmd_xoxxox_tlkuni
Clients: Chat client (game engine (Unity))
Dependencies:
Overview
A chat client for the game engine:
- Language used: C# (Unity)
- Development environment: Unity
Installation: Manual
- *
- If individual settings are required before installation, modify the files first by following the steps under Settings below.
Copy the following files into any folder in the game engine (Unity) Project pane:
# Copy from:
> ${dirtop}/cnnmmd/export/app/xoxxox
> appuni
# Copy to:
> Assets/*
Add the following audio playback component to any object (such as a character) in the Scene, via the Hierarchy pane:
> AudioSource
Add the following as components to the same object (after adding them, check that the components reflect the parameters from the individual settings files):
> RcvVce # Receive audio
> SndVce # Send audio
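As an alternative to adding the components by hand in the Inspector, a minimal sketch along the following lines should also work. It assumes RcvVce and SndVce are MonoBehaviour components from the copied appuni sources; the script name TlkuniSetup is made up for illustration:

using UnityEngine;

// Hypothetical setup script: attach it to the target object and it adds the
// audio playback, receive, and send components if they are missing.
public class TlkuniSetup : MonoBehaviour
{
    void Awake()
    {
        if (GetComponent<AudioSource>() == null) gameObject.AddComponent<AudioSource>(); // audio playback
        if (GetComponent<RcvVce>() == null) gameObject.AddComponent<RcvVce>(); // receive audio
        if (GetComponent<SndVce>() == null) gameObject.AddComponent<SndVce>(); // send audio
    }
}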
Settings
The individual configuration files are as follows (the contents of the files that specify addresses are also shown): [※A] [※B]
# Individual settings:
> ${dirtop}/cnnmmd/import/cnnmmd_xoxxox_tlkuni/export/app/xoxxox/appuni/bin
> Params.cs
- public static string srvadr = "127.0.0.1"; // Address of the intermediate connector
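For orientation, the settings file as a whole probably has roughly the shape sketched below; only the srvadr line is taken from this document, and the static-class layout is an assumption:

// Rough sketch of Params.cs (assumed layout).
public static class Params
{
    public static string srvadr = "127.0.0.1"; // Address of the intermediate connector
}

Edit srvadr before copying the sources into Assets/* (or via the custom folder, as described in ※A below) so the client points at your intermediate connector.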
- ※A
- To modify a plugin file, copy it into the custom folder under the same layout, edit the copy, and then update the main (export) tree as needed:
# Plugin side:
> ${dirtop}/cnnmmd/import/${plugin}/export/.../${target}
# Custom folder side:
> ${dirtop}/cnnmmd/import/${custom}/export/.../${target}
$ cd ${dirtop}/cnnmmd/manage/bin
$ ./remove.sh && ./append.sh
- ※B
- The custom folder is a folder under import (by default, a folder named “import/custom”) that contains the following file:
> manage/cnf/latest.txt
Execution
Start the necessary servers and run the workflow.
Next, run the game engine (Unity); the conversation starts from there. [※1] [※C]
- ※1
- The audio input device is whichever microphone is currently selected as active on the OS side (a quick check script is sketched after these notes).
- ※C
- At the start of a conversation or when the speaker changes, the speech synthesis engine needs time (possibly minutes on a CPU) to load its dictionary and the individual models; while the speech synthesis node's display is active, models are still being loaded or speech data is still being generated.
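Regarding ※1: if an unexpected microphone is being used, a throwaway script like the one below can list the input devices Unity can see. It only logs them; the plugin itself still follows the OS-side selection, and the script name MicCheck is made up for illustration:

using UnityEngine;

// Hypothetical check script: logs every microphone Unity detects so you can
// verify that the OS-side default device is the one you expect.
public class MicCheck : MonoBehaviour
{
    void Start()
    {
        foreach (string dev in Microphone.devices)
            Debug.Log("Microphone detected: " + dev);
    }
}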