Have something to say?

Tell us how we could make the product more useful to you. For support, reach out at hello@lamatic.ai.

In Progress

Test Case Expected Results

When creating Test Cases, capture expected results (in addition to the name and test conditions Lamatic captures today). Consider also:

- Using code checks and AI to find deviations and grade result quality
- Using that quality score as an input when deciding whether to deploy an update
- Recommending the best option when A/B testing prompts, agents, workflows, models, and other settings such as temperature
- Using these results to create benchmarks
- Measuring and alerting on "model drift" over time
- Integrating human-in-the-loop strategies to expand Test Cases and quality checking

Added later (based on feedback from another user): make it easier to see the history of Test Case executions without leaving Experiments. Marc Greenberg didn't realize he could find the results in Logs, and also disliked having to dig through logs to find the execution details for a given test.
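To make the "grade result quality, then gate deployment on the score" idea concrete, here is a minimal sketch. It is not the Lamatic API; `TestCase`, `grade`, and `should_deploy` are hypothetical names, and the grader is a cheap textual-similarity check standing in for a richer code-or-AI judge.

```python
# Hypothetical sketch (not the Lamatic API): grade a test run against its
# expected result and gate deployment on the aggregate quality score.
from dataclasses import dataclass
from difflib import SequenceMatcher

@dataclass
class TestCase:
    name: str
    conditions: dict      # inputs / settings for the run
    expected: str         # the proposed "expected result" field

def grade(expected: str, actual: str) -> float:
    """Cheap code-level check: textual similarity in [0, 1].
    A real grader could also use an AI judge to score semantic deviation."""
    return SequenceMatcher(None, expected, actual).ratio()

def should_deploy(cases: list[TestCase], outputs: dict[str, str],
                  threshold: float = 0.8) -> bool:
    """Deploy only if the mean quality score clears the threshold."""
    scores = [grade(c.expected, outputs[c.name]) for c in cases]
    return sum(scores) / len(scores) >= threshold

cases = [TestCase("greeting", {"temperature": 0.2}, "Hello, world")]
print(should_deploy(cases, {"greeting": "Hello, world"}))  # True
```

The same per-case scores could feed the other items above: stored over time they become a benchmark series, and a sustained drop in the series is exactly the "model drift" signal to alert on.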

cwhiteman Over 1 year ago

15

💡 Feature Requests

Support for Local Development

An open-source SDK to address the developer adoption barriers and vendor lock-in concerns that are currently limiting growth.

Problem

- Developers resist Lamatic's no-code/low-code platform due to limited library support, lack of local development capabilities, missing CLI tools, and weak version control integration
- Enterprise and startup customers worry about vendor lock-in and business continuity if Lamatic changes pricing or discontinues service
- Competitors offering open-source alternatives have an advantage

Proposed Solution

Build a standalone, open-source SDK enabling developers to:

- Develop AI agents/systems locally in their preferred IDE
- Test and debug offline without cloud dependency
- Use Git for version control across all project components
- Create custom integrations and capabilities
- Deploy anywhere (on-premise, private/public cloud, or Lamatic-managed)
- Compile flows into executable code via CI/CD pipelines

Core Architecture

A YAML-based configuration system with:

- Pre-built node libraries for AI components
- A containerized runtime (Pod + Core) that executes configurations
- A compiler that optimizes YAML into executable workers
- A local web editor accessible via localhost
- A CLI for all development operations
- Multiple invocation methods (API, webhooks, widgets)
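The "compile YAML into executable workers" step in the proposed architecture could look roughly like this sketch. Everything here is illustrative: the node library, node types, and `compile_flow` are invented names, and the parsed YAML flow is shown as its equivalent Python dict so the example stays self-contained.

```python
# Hypothetical sketch of the proposed compile step: a parsed YAML flow
# (shown as the equivalent Python dict) is resolved against a pre-built
# node library and compiled into a callable worker.
from typing import Any, Callable

# Pre-built node library: node type -> implementation
NODE_LIBRARY: dict[str, Callable[[Any, dict], Any]] = {
    "uppercase": lambda data, cfg: data.upper(),
    "template":  lambda data, cfg: cfg["format"].format(text=data),
}

flow = {  # what parsing a flow YAML file would produce
    "name": "demo-flow",
    "nodes": [
        {"type": "uppercase"},
        {"type": "template", "config": {"format": "Result: {text}"}},
    ],
}

def compile_flow(flow: dict) -> Callable[[Any], Any]:
    """Resolve node types ahead of time so unknown nodes fail at compile
    time rather than mid-run, then return a worker that chains the steps."""
    steps = [(NODE_LIBRARY[n["type"]], n.get("config", {}))
             for n in flow["nodes"]]

    def worker(data: Any) -> Any:
        for fn, cfg in steps:
            data = fn(data, cfg)
        return data

    return worker

run = compile_flow(flow)
print(run("hello"))  # Result: HELLO
```

Resolving node types at compile time (rather than per invocation) matches the request's emphasis on a compiler that optimizes YAML into workers, and it is what lets a CI/CD pipeline reject a broken flow before deployment.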

Aman Sharma About 3 hours ago

💡 Feature Requests

Support for Dynamic File References in Flow Config

Prompt nodes are embedded inline in the flow config with formatting artifacts (hello\\nhello), apparent test content, and no versioning metadata. Editing a prompt requires understanding the entire config file, and changes produce noisy diffs that obscure actual prompt improvements.

Before:

```yaml
prompts:
  - id: 187c2f4b-c23d-4545-abef-73dc897d6b7b
    role: assistant
    content: >-
      hello
      hello
      Important: please provide the response in wrap text format...
```

After:

```yaml
prompts:
  - id: rag_system_prompt_v1
    role: assistant
    # Edit prompts in /prompts/rag_system.md — supports preview & versioning
    contentRef: ./prompts/rag_system.md
    version: "1.2.0"
    lastUpdated: "2025-01-15"
```

Developer Impact: prompts become first-class artifacts, reviewable in PRs, A/B testable, and editable without touching pipeline config.
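A loader supporting the proposed field could resolve it with a few lines. This is a hypothetical sketch, not Lamatic's implementation: `resolve_prompt` is an invented helper, the field names follow the "After" example above, and the file path is created locally just to make the example runnable.

```python
# Hypothetical sketch: resolve the proposed `contentRef` field by reading
# the referenced prompt file, falling back to inline `content` otherwise.
from pathlib import Path

def resolve_prompt(prompt: dict, base_dir: Path) -> str:
    """Return the prompt text, loading it from disk when contentRef is set."""
    if "contentRef" in prompt:
        return (base_dir / prompt["contentRef"]).read_text(encoding="utf-8")
    return prompt["content"]

# Set up an illustrative prompt file next to the (hypothetical) flow config.
base = Path("/tmp/flow_demo")
(base / "prompts").mkdir(parents=True, exist_ok=True)
(base / "prompts" / "rag_system.md").write_text("You are a RAG assistant.")

prompt = {
    "id": "rag_system_prompt_v1",
    "role": "assistant",
    "contentRef": "./prompts/rag_system.md",
    "version": "1.2.0",
}
print(resolve_prompt(prompt, base))  # You are a RAG assistant.
```

Keeping the loader backward compatible with inline `content` (as above) would let existing flow configs keep working while new ones migrate to file references.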

Aman Sharma About 3 hours ago

💡 Feature Requests