May 30, 2025
As automotive software complexity continues to rise, engineering teams are turning to AI-assisted tools to improve developer efficiency, reduce errors, and maintain quality in safety-critical systems. Here at Sibros, we’ve adopted Cursor IDE, Windsurf, Cline, Claude Code, and other tools built on large language models (LLMs) to support development across our connected vehicle platform, from OTA updates to data logging and diagnostics.
In this article, we share practical patterns, lessons learned, and tips for integrating AI coding assistants into your workflow, particularly for embedded and cloud-based automotive software. These are not theoretical ideas but applied methods we use every day to build production systems, and you can apply them too.
In automotive systems, mistakes made early in the design phase are expensive to fix later. This is especially true in tightly coupled services like OTA update orchestration or log streaming. With Cursor IDE, we begin every new feature with extensive planning, using the AI to co-design the system.
We prompt the LLM to define service responsibilities, sketch domain models, and anticipate communication contracts.
Example Prompt:

We are building a deployment service for OTA software updates. Please outline:
- The core responsibilities the service should own
- A first sketch of the domain model
- The communication contracts it needs with neighboring services
This kind of upfront design reduces churn later and keeps the entire team aligned on architecture decisions.
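To make that concrete, here is the shape of domain sketch such a planning session tends to produce. This is a minimal illustrative sketch in Go, not our production model; every name in it (Rollout, Stage, Deployer) is hypothetical.

```go
// Illustrative OTA deployment domain sketch; all names are hypothetical.
package deployment

import "time"

// Rollout is one staged delivery of a firmware bundle to a fleet.
type Rollout struct {
	ID        string
	BundleID  string  // firmware bundle being shipped
	Stages    []Stage // e.g., canary group first, then the full fleet
	CreatedAt time.Time
}

// Stage is a vehicle group plus the criterion for promoting past it.
type Stage struct {
	VehicleGroup  string
	MaxFailurePct float64 // abort the rollout above this failure rate
}

// Deployer is the contract neighboring services program against.
type Deployer interface {
	StartRollout(r Rollout) error
	Status(rolloutID string) (state string, err error)
	Abort(rolloutID string) error
}
```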
Out of the box, LLMs have no context about your specific repo. That's where .cursorrules files come in. Think of them as your project's AI onboarding doc.
We use them to define things like coding conventions and naming patterns, architectural boundaries between packages, and testing expectations.
Example snippet (an illustrative sketch; a real file encodes your own conventions):
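```
# .cursorrules (illustrative; invented for this example)
- All backend services are Go; follow Effective Go naming conventions.
- Wrap errors with fmt.Errorf("context: %w", err); never swallow them.
- Domain types live in internal/domain and must not import transport packages.
- Every exported function needs a doc comment and a table-driven test.
```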
This allows the AI to complete code that matches our style and avoids internal inconsistencies.
Automotive code often touches critical functions: think firmware updates, brake controllers, or telemetry diagnostics. That means AI-generated code must be testable, predictable, and verifiable.
We use a "test-first" prompt approach: define what success looks like before asking the AI to write the logic.
Prompt to generate tests (illustrative):

Write table-driven tests for a function that applies a firmware update to a vehicle group. Cover a successful rollout, a vehicle that rejects the update, and a timeout mid-transfer. Do not write the implementation yet; tests only.
This ensures the AI works within constraints and avoids unwanted behavior downstream.
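The skeleton that comes back looks roughly like this. It is a minimal sketch, assuming a hypothetical ApplyUpdate function and sentinel errors; true to test-first style, the tests start red:

```go
package deployment

import (
	"errors"
	"testing"
)

// Hypothetical sentinel errors the contract is expressed in terms of.
var (
	ErrRejected = errors.New("vehicle rejected update")
	ErrTimeout  = errors.New("transfer timed out")
)

// ApplyUpdate is a stub; it stays unimplemented until the AI
// writes logic that turns these tests green.
func ApplyUpdate(vehicleID, bundleID string) error {
	return errors.New("not implemented")
}

func TestApplyUpdate(t *testing.T) {
	cases := []struct {
		name      string
		vehicleID string
		wantErr   error
	}{
		{"successful rollout", "veh-001", nil},
		{"vehicle rejects update", "veh-reject", ErrRejected},
		{"timeout mid-transfer", "veh-timeout", ErrTimeout},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			err := ApplyUpdate(tc.vehicleID, "bundle-1.2.3")
			if !errors.Is(err, tc.wantErr) {
				t.Fatalf("ApplyUpdate(%q) error = %v, want %v", tc.vehicleID, err, tc.wantErr)
			}
		})
	}
}
```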
We no longer read full stack traces manually. Instead, we paste them directly into Cursor and ask for root cause analysis. This is especially useful in Go, where panics often cascade.
Prompt example (stack trace abbreviated):

panic: runtime error: invalid memory address or nil pointer dereference
[full goroutine trace pasted here]

This happens when trying to call `.Deploy()` inside the OTA rollout logic. Help me debug.
The AI returns probable causes, suggests logging lines, and even offers fixes. This replaces what used to be 30 minutes of rubber-ducking and log sifting.
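For a panic like the one above, the suggestion is usually a guard on a nil dependency. A hedged sketch, with hypothetical names (Orchestrator, Deployer):

```go
package deployment

import "fmt"

// Deployer and Orchestrator are hypothetical names for illustration.
type Deployer interface {
	Deploy(rolloutID string) error
}

type Orchestrator struct {
	deployer Deployer // nil if service wiring is incomplete
}

// startRollout guards the dependency the panic pointed at,
// turning a crash into an explicit, loggable error.
func (o *Orchestrator) startRollout(rolloutID string) error {
	if o.deployer == nil {
		return fmt.Errorf("rollout %s: deployer not configured", rolloutID)
	}
	return o.deployer.Deploy(rolloutID)
}
```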
One common trap when working with LLMs is the tendency to pile fixes on top of broken code. This creates "cruft," a term we use for layers of poorly grounded edits.
To avoid this, we revert to the last known-good state and re-prompt with better context, rather than asking the AI to patch its own broken output.
This habit keeps code clean and minimizes the risk of shipping something subtly broken.
Many AI users underutilize Cursor for refactoring. We use it proactively to reduce tech debt. Cursor is great at spotting long functions, repeated blocks, or inconsistent patterns.
Prompt (illustrative):

Scan this package for functions longer than ~40 lines and for repeated blocks. Propose refactors that extract helpers without changing behavior; existing tests must pass unmodified.
This helps us modernize legacy code faster while maintaining confidence in test coverage.
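A representative before/after with hypothetical names: the repeated validation block becomes a helper, and each call site shrinks to intent.

```go
package deployment

import "fmt"

// validateBundleID was previously inlined in several handlers;
// extracting it is the kind of refactor Cursor proposes.
func validateBundleID(id string) error {
	if id == "" {
		return fmt.Errorf("bundle id is empty")
	}
	return nil
}

func scheduleRollout(bundleID string) error {
	if err := validateBundleID(bundleID); err != nil {
		return fmt.Errorf("schedule rollout: %w", err)
	}
	// ...scheduling logic elided...
	return nil
}
```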
We've learned that some frameworks and patterns are more AI-friendly than others. Cursor works best when the system is modular, typed, and documented.
We favor: strongly typed languages (Go on our backend), modular services with explicit interfaces, and well-documented, widely adopted frameworks.
Avoiding internal-only DSLs and untyped APIs increases the AI’s effectiveness significantly.
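A small illustration of what "modular, typed, and documented" buys the model, again with hypothetical names: explicit types carry context that an untyped payload would hide.

```go
package telemetry

// Reading is one timestamped signal sample from a vehicle.
// An explicit type like this gives the model context that a
// map[string]interface{} payload would hide.
type Reading struct {
	SignalName string
	Value      float64
	UnixMillis int64
}

// Sink is the documented, mockable contract log consumers implement.
type Sink interface {
	// Write persists a batch; implementations must be safe for concurrent use.
	Write(batch []Reading) error
}
```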
AI doesn’t just write code; it writes docs, too. We use Cursor to generate internal documentation, API specs, and even architectural decision records (ADRs).
Prompt (illustrative):

Generate an architecture decision record for the change in this branch: the context, the decision, alternatives considered, and the consequences. Keep it under one page.
This reduces the manual overhead and keeps our internal docs up to date as features evolve.
These AI-assisted development patterns didn’t come from theory. They evolved from hundreds of hours spent building real automotive-grade systems. Our advice: treat the AI like a collaborator, not an oracle. Let it handle repetition and suggestion, but always layer human review, especially for safety-critical logic.
For connected vehicle developers looking to move faster without compromising reliability, adopting these patterns offers a real edge.