Is it necessary to manually build the code for every new intent? #1257
Comments
I have the same problem; the only difference between your project and mine is that I am using the "childs" option to create more than one bot, and all of them are constantly updated to improve performance. Did you get any solution?
Hi! No, I did not receive any feedback yet, but thanks for chiming in.
Hi @jalq1978, modifying the corpus means the model needs to be retrained, because the changes will probably affect the already calculated weights. In the examples you can see that the bot is usually trained on startup.
Hi! I was doing some tests and found that when we reset the containers inside dock.js, the system stops recognizing the old intents, or recognizes new ones (depending on the change made); at least here it worked correctly. This is the code snippet I added in the file: I did this before @aigloss shared his knowledge about the tool here, so it may not be the right way, but I believe it's a start toward solving your question, @jalq1978.
Hello,

```js
const { dockStart } = require('@nlpjs/basic');

(async () => {
  const dock = await dockStart();
  const nlp = dock.get('nlp');
  await nlp.train();
  let response = await nlp.process('en', 'Who are you');
  console.log(response.intent, response.score);
  response = await nlp.process('en', 'quantum physics');
  console.log(response.intent, response.score);
  // Drop the cached intent list so the new intent is picked up on retrain
  delete nlp.nluManager.domainManagers.en.domains.master_domain.intentsArr;
  nlp.addDocument('en', 'what is quantum physics', 'quantum.physics');
  nlp.addDocument('en', 'tell me about quantum physics', 'quantum.physics');
  await nlp.train();
  response = await nlp.process('en', 'Who are you');
  console.log(response.intent, response.score);
  response = await nlp.process('en', 'quantum physics');
  console.log(response.intent, response.score);
})();
```

It gives this output:
The "strange" thing to do here is delete nlp.nluManager.domainManagers.en.domains.master_domain.intentsArr; This intentsArr is recalculated to avoid doing Object.keys each time we have to process an utterance. I think that a better approach is to consider this a bug, and when someone does an "addDocument", then automatically remove it. |
We are building an NLP model for a WhatsApp chatbot. This is a very dynamic chatbot that will need constant training by adding new intents and utterances. To do that, we built a frontend (see below).

We were wondering whether, for every new intent or utterance, we will have to go to our backend code on Lambda, code the intent handler, and build it again, or whether there is a more automatic approach that would allow us to do whatever we need on our frontend, click build there, and have it published in production. Is there something we are missing here?
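For context, the flow being asked about, pushing new intents from a frontend and retraining without a code rebuild, can be sketched with nlp.js's `addDocument`/`train` calls. This is only an illustration; the payload shape and function name are made up for this example, and in practice it would run inside the Lambda/backend handler:

```javascript
// Rough sketch of server-side dynamic training. `update` is a hypothetical
// payload sent from the frontend, e.g.:
//   { locale: 'en', intent: 'quantum.physics', utterances: ['what is quantum physics'] }
async function applyIntentUpdate(nlp, update) {
  for (const utterance of update.utterances) {
    nlp.addDocument(update.locale, utterance, update.intent);
  }
  await nlp.train(); // retrain in place; no redeploy of handler code needed
  return nlp;
}
```

Whether this works without touching handler code also depends on how intent-specific logic (answers, actions) is wired up on the backend.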