cnnmmd_xoxxox_tttlcp
Engines: AI Related: Language Generation Engine: Local (Gemma 3n [CPU/GPU])
Implementation/Rights (Source Code/License):
- https://github.com/cnnmmd/cnnmmd_xoxxox_tttlcp
Dependencies:
- cnnmmd_xoxxox_libtlk
Overview
This plugin provides a language generation environment and model (llama.cpp + a quantized model) that can run on a general-purpose PC (CPU only), as sketched after the list below: [*1]
- https://github.com/ggml-org/llama.cpp
- https://huggingface.co/ggml-org/gemma-3n-E2B-it-GGUF
- Operating environment: Local (CPU/GPU)
- *1: Gemma 3n is multimodal, but this version uses a model quantized to run in minimal environments and does not currently include image recognition.
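For context, the following is a minimal, hypothetical sketch of the kind of call the underlying engine performs: loading a quantized Gemma 3n GGUF with the llama-cpp-python bindings and generating a reply on CPU. The model file name, context size, and thread count are assumptions; the plugin itself runs llama.cpp inside its container, so this is only an illustration of the engine, not the plugin's actual interface.

```python
# Illustrative sketch only: CPU inference on a quantized Gemma 3n GGUF
# using the llama-cpp-python bindings. The model file name and parameters
# are assumptions, not the plugin's actual configuration.
from llama_cpp import Llama

llm = Llama(
    model_path="applcp/gemma-3n-E2B-it.gguf",  # hypothetical path to the downloaded GGUF
    n_ctx=2048,                                # context window
    n_threads=4,                               # CPU threads; tune to the host
)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Please introduce yourself briefly."}],
    max_tokens=128,
)
print(reply["choices"][0]["message"]["content"])
```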
Installation
This plugin creates the following resources: [*1] [*2]
- Container: xoxxox_envlcp: 0.7 GB
- Folder: import/cnnmmd_xoxxox_tttlcp/applcp: 5.0 GB
- *1: The container will run with 4 GB of memory (RAM), but loading the model will then take several minutes (and subsequent conversations will tend to be slow). With 6-16 GB, responses are reasonably fast.
- *2: Because it runs in a container, Metal acceleration is not available, even on a Mac.
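Given the memory guidance in *1, a small pre-flight check on the host can be useful before starting the container. This is not part of the plugin; it only mirrors the thresholds above and assumes the psutil package is installed.

```python
# Pre-flight check mirroring note *1: 4 GB is the minimum, 6-16 GB is comfortable.
# Not part of the plugin; requires the psutil package.
import psutil

total_gb = psutil.virtual_memory().total / 1024 ** 3
if total_gb < 4:
    print(f"{total_gb:.1f} GB RAM: below the 4 GB minimum; the model may not start.")
elif total_gb < 6:
    print(f"{total_gb:.1f} GB RAM: the model should start, but expect slow startup and responses.")
else:
    print(f"{total_gb:.1f} GB RAM: enough for reasonably fast responses.")
```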