
Planned

Test Case Expected Results

When creating Test Cases, capture expected results (in addition to the name and test conditions that Lamatic captures today). Consider also:

- Using code checks and AI to find deviations and grade result quality, and using that quality score as an input for deciding whether to deploy an update (see the sketch after this list).
- Recommending the best option when A/B testing prompts, agents, workflows, models, and other settings such as temperature.
- Using these results to create benchmarks.
- Measuring and alerting on “model drift” over time.
- Integrating “human-in-the-loop” strategies to expand Test Cases and quality checking.

Added later (based on feedback from another user): make it easier to see the history of Test Case executions without leaving Experiments. Marc Greenberg didn’t realize he could find the results in Logs, and also didn’t like having to dig through logs to find the execution details for a given test.
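As a rough illustration of the deploy-gate idea, here is a minimal Python sketch. Every name in it (TestCase, grade_result, should_deploy, the 0.85 threshold) is invented for this example, not a Lamatic API:

```python
from dataclasses import dataclass
from difflib import SequenceMatcher

QUALITY_THRESHOLD = 0.85  # assumed cutoff; not a real Lamatic setting

@dataclass
class TestCase:
    name: str
    conditions: dict   # inputs/settings for the run (captured today)
    expected: str      # the expected result this request asks Lamatic to capture

def grade_result(expected: str, actual: str) -> float:
    """Naive deviation check: string similarity in [0, 1].
    A real grader could combine code checks with an AI judge."""
    return SequenceMatcher(None, expected, actual).ratio()

def should_deploy(cases: list[TestCase], run) -> bool:
    """Gate a deploy on the average quality score across Test Cases."""
    scores = [grade_result(tc.expected, run(tc.conditions)) for tc in cases]
    return sum(scores) / len(scores) >= QUALITY_THRESHOLD

# Example: a workflow that always answers "Hello!" passes this gate.
cases = [TestCase("greeting", {"prompt": "Say hi"}, "Hello!")]
print(should_deploy(cases, lambda conditions: "Hello!"))  # True
```

The same scores could feed the other ideas above: stored as benchmarks, they give a baseline against which drift alerts can fire when the rolling average declines.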

cwhiteman 2 months ago

💡 Feature Requests

Planned

Deactivate individual widgets?

It might be helpful if there were a way to deactivate individual widgets: leave them staged in the workflow, but turn them off while I’m experimenting with other configurations. For instance, at the moment I have a code widget, “Flatten array”, which is not connected in the workflow, but it seems to be breaking the API call even though it isn’t connected. My widget here, with its bit of code, is probably just rubbish and I will probably delete it in the end, but it would be nice if I could deactivate it while I troubleshoot the other parts of the workflow, so I don’t need to delete it to test the other widgets. My apologies, maybe there is already a way to do this that I’m not aware of.
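A minimal sketch of the requested behavior, assuming a hypothetical `enabled` flag on each widget (the widget and workflow shapes here are invented, not Lamatic’s actual data model):

```python
# Each widget carries an "enabled" flag; the runner skips anything switched
# off, so a deactivated widget stays staged in the workflow definition but
# never executes or participates in validation.
widgets = [
    {"name": "Fetch data", "enabled": True},
    {"name": "Flatten array", "enabled": False},  # staged but deactivated
    {"name": "Respond", "enabled": True},
]

def run_workflow(widgets: list[dict]) -> None:
    for widget in widgets:
        if not widget.get("enabled", True):
            continue  # deactivated: kept in the definition, skipped at runtime
        print(f"running {widget['name']}")

run_workflow(widgets)  # "Flatten array" is never run
```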

celso.wilkinson about 2 months ago


💡 Feature Requests

Planned

Web Crawl Node

Create a Web Crawl Node to allow Lamatic to ingest and store data from a website to be accessed downstream. Specific requirements:

1) Accept a URL, multiple URLs, a sitemap.xml, or a URL-pattern regex as a parameter to direct the crawl (see the sketch after this list). Consider what would be needed to accept login credentials as a parameter to enable ingesting content behind a login (future).
2) Support storing important metadata along with the visible content (for instance, the hreflang tag, which indicates the language and geography of the target audience). Consider a default metadata-capture config that can be optionally reduced or expanded (future).
3) Generate status and error messaging: accept parameters that configure status notifications (which notifications to send, and to whom?).

Thoughts/questions:

- How do we tell the Node where to store the crawled content?
- How should the visibility of this store be determined (in other words, is the store available only to this workflow, to others within the Project, or to others within the Organization)?
- Should the Node support an option to simultaneously vectorize the content as part of the crawl (to reduce storage needs)?
- Can/should we use the attribute on the sitemap.xml to trigger this Node?
- Can/should the Node capture image, audio, and/or video content?
- How should notifications and error handling work (regarding the status of the crawl)?
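As a rough sketch of requirement 1) and the metadata capture in 2), here is a minimal Python example; every function and field name in it (resolve_targets, crawl_page, the metadata shape) is an assumption for illustration, not a proposed Lamatic API:

```python
import re
import xml.etree.ElementTree as ET
from urllib.request import urlopen

def resolve_targets(source, pattern=None):
    """Normalize the crawl parameter (a URL, a list of URLs, or a
    sitemap.xml) into a concrete URL list, optionally filtered by regex."""
    if isinstance(source, list):
        urls = source
    elif source.endswith("sitemap.xml"):
        ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
        tree = ET.parse(urlopen(source))
        urls = [loc.text for loc in tree.findall(".//sm:loc", ns)]
    else:
        urls = [source]
    if pattern:  # URL-pattern regex to direct the crawl
        urls = [u for u in urls if re.search(pattern, u)]
    return urls

def crawl_page(url):
    """Fetch one page and keep metadata (e.g. hreflang) next to the content."""
    html = urlopen(url).read().decode("utf-8", errors="replace")
    hreflangs = re.findall(r'hreflang="([^"]+)"', html)
    return {"url": url, "content": html, "metadata": {"hreflang": hreflangs}}

# Example: crawl only the /docs/ pages listed in a sitemap.
for url in resolve_targets("https://example.com/sitemap.xml", r"/docs/"):
    page = crawl_page(url)
```

Where the resulting records land, and who can see that store, is exactly the open visibility question in the thoughts/questions above.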

cwhiteman 2 months ago


💡 Feature Requests