Modern Development Patterns for Automotive Software Teams Using AI Tools
Industry Insights / May 30, 2025



As automotive software complexity continues to rise, engineering teams are turning to AI-assisted tools to improve developer efficiency, reduce errors, and maintain quality in safety-critical systems. Here at Sibros, we've adopted tools like Cursor IDE, Windsurf, Cline, Claude Code, and other large language model (LLM) assistants to support development across our connected vehicle platform, from OTA updates to data logging and diagnostics.

In this article, we share practical patterns, lessons learned, and tips for integrating AI coding assistants into your workflow, particularly for embedded and cloud-based automotive software. These are not theoretical ideas but real, applied methods that we incorporate into our daily workflows to build production systems. You can, too.

Start With Structure: AI-Assisted Planning and Architecture

In automotive systems, mistakes made early in the design phase are expensive to fix later. This is especially true in tightly coupled services like OTA update orchestration or log streaming. With Cursor IDE, we begin every new feature with an extensive planning phase, using the AI to co-design the system.

We prompt the LLM to define service responsibilities, sketch domain models, and anticipate communication contracts.

Example Prompt:

We are building a deployment service for OTA software updates. Please outline:

  1. Service responsibilities
  2. Entity models and fields
  3. Service-to-service communication points
  4. Suggested folder structure for a Go-based monorepo

This kind of upfront design reduces churn later and keeps the entire team aligned across architecture decisions.
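A planning prompt like this typically yields entity sketches that can be captured directly as typed Go models. Below is a hypothetical sketch of what such output might look like for the deployment service; every name here (Rollout, Deployment, StageCount) is an illustrative assumption, not an actual Sibros schema.

```go
package main

import (
	"fmt"
	"time"
)

// Rollout is a hypothetical domain model for an OTA software rollout.
type Rollout struct {
	ID        string
	PackageID string
	Stages    []Deployment
	CreatedAt time.Time
}

// Deployment represents one staged wave of the rollout.
type Deployment struct {
	Stage      int
	VehicleIDs []string
	Status     string // e.g. "pending", "active", "complete"
}

// StageCount reports how many staged deployments a rollout contains.
func (r Rollout) StageCount() int {
	return len(r.Stages)
}

func main() {
	r := Rollout{
		ID:        "r-001",
		PackageID: "pkg-42",
		Stages: []Deployment{
			{Stage: 1, VehicleIDs: []string{"v1"}, Status: "pending"},
		},
		CreatedAt: time.Now(),
	}
	fmt.Println("stages:", r.StageCount())
}
```

Capturing the AI's design output as types like these, rather than prose, makes the plan immediately checkable by the compiler.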

Teaching the AI About Your Codebase With .cursorrules

Out of the box, LLMs have no context about your specific repo. That's where .cursorrules files come in. Think of them as your project's AI onboarding doc.

We use them to define:

  • Our layered architecture
  • Naming conventions
  • Key files and their roles

Example snippet:

## Code Organization
- Entity layer: Domain models
- Service layer: Business logic
- Storage layer: Postgres repositories
- Endpoint layer: API definitions

This allows the AI to complete code that matches our style and avoids internal inconsistencies.
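To make the layering concrete, here is a minimal sketch of how those layers might map to Go code, with an in-memory repository standing in for the Postgres one. The type and method names are hypothetical, chosen only to illustrate the convention the rules file describes.

```go
package main

import "fmt"

// Entity layer: domain model.
type Rollout struct {
	ID     string
	Status string
}

// Storage layer: the interface the service layer depends on;
// a Postgres implementation would satisfy it in production.
type RolloutRepository interface {
	FindByID(id string) (Rollout, error)
}

// Service layer: business logic, unaware of database details.
type RolloutService struct {
	repo RolloutRepository
}

func (s RolloutService) Describe(id string) (string, error) {
	r, err := s.repo.FindByID(id)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("rollout %s is %s", r.ID, r.Status), nil
}

// In-memory repository used here in place of the real storage layer.
type memRepo map[string]Rollout

func (m memRepo) FindByID(id string) (Rollout, error) {
	r, ok := m[id]
	if !ok {
		return Rollout{}, fmt.Errorf("rollout %s not found", id)
	}
	return r, nil
}

func main() {
	svc := RolloutService{repo: memRepo{"r-1": {ID: "r-1", Status: "active"}}}
	desc, _ := svc.Describe("r-1")
	fmt.Println(desc)
}
```

When the rules file names these layers explicitly, the AI tends to place generated code in the right one instead of mixing SQL into business logic.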

Shift Left on Safety: Test-First AI Workflows

Automotive code often touches critical functions: think firmware updates, brake controllers, or telemetry diagnostics. That means AI-generated code must be testable, predictable, and verifiable.

We use a "test-first" prompt approach: define what success looks like before asking the AI to write the logic.

Prompt to generate tests:

Write integration tests for the software rollout endpoint. Test cases:

  • Valid rollout with multiple staged deployments
  • Failure when package ID is missing
  • Rollout respects feature flag settings

This ensures the AI works within constraints and avoids unwanted behavior downstream.
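A prompt like that tends to produce table-driven Go tests. The sketch below shows the shape of the expected output against a hypothetical validation function; the request fields and error messages are assumptions for illustration, not the real endpoint contract.

```go
package main

import "fmt"

// RolloutRequest is a hypothetical payload for the rollout endpoint.
type RolloutRequest struct {
	PackageID   string
	StageCount  int
	FlagEnabled bool
}

// validateRollout mirrors the test cases in the prompt: a missing
// package ID and a disabled feature flag are rejected up front.
func validateRollout(req RolloutRequest) error {
	if req.PackageID == "" {
		return fmt.Errorf("package ID is required")
	}
	if !req.FlagEnabled {
		return fmt.Errorf("staged rollouts are behind a feature flag")
	}
	if req.StageCount < 1 {
		return fmt.Errorf("at least one staged deployment is required")
	}
	return nil
}

func main() {
	// Table-driven cases, the shape we would expect the AI to generate.
	cases := []struct {
		name    string
		req     RolloutRequest
		wantErr bool
	}{
		{"valid multi-stage rollout", RolloutRequest{"pkg-1", 3, true}, false},
		{"missing package ID", RolloutRequest{"", 1, true}, true},
		{"feature flag disabled", RolloutRequest{"pkg-1", 1, false}, true},
	}
	for _, c := range cases {
		err := validateRollout(c.req)
		fmt.Printf("%s: gotErr=%v\n", c.name, err != nil)
		if (err != nil) != c.wantErr {
			panic("unexpected result for " + c.name)
		}
	}
}
```

Writing the table first pins down the behavior; the AI then fills in logic that must satisfy it.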

Debugging at Scale: Error Message Injection

We no longer read full stack traces manually. Instead, we paste them directly into Cursor and ask for root cause analysis. This is especially useful in Go, where panics often cascade.

Prompt example:

Error: `panic: runtime error: invalid memory address or nil pointer dereference`

This happens when trying to call `.Deploy()` inside the OTA rollout logic. Help me debug.

The AI returns probable causes, suggests logging lines, and even offers fixes. This replaces what used to be 30 minutes of rubber-ducking and log sifting.
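For this particular panic, the typical root cause is an uninitialized dependency, and the typical suggested fix is a nil guard that turns the crash into a diagnosable error. A minimal sketch, with hypothetical names standing in for the real rollout code:

```go
package main

import "fmt"

// Deployer is a hypothetical interface behind the `.Deploy()` call.
type Deployer interface {
	Deploy() error
}

type Rollout struct {
	deployer Deployer // nil until wired up: the classic panic source
}

// An unguarded version, `return r.deployer.Deploy()`, panics with
// "invalid memory address or nil pointer dereference" when deployer
// was never initialized.

// Start shows the kind of fix the AI typically suggests: guard the
// nil dependency and return an error instead of panicking.
func (r *Rollout) Start() error {
	if r == nil || r.deployer == nil {
		return fmt.Errorf("rollout has no deployer configured")
	}
	return r.deployer.Deploy()
}

func main() {
	var r Rollout // deployer is nil
	if err := r.Start(); err != nil {
		fmt.Println("caught:", err)
	}
}
```

The guard converts a cascading panic into a logged, attributable failure, which is exactly the kind of suggestion we ask the AI to justify before accepting.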

Avoiding LLM Drift With Git Hygiene

One common trap when working with LLMs is the tendency to pile fixes on top of broken code. This creates "cruft", our term for layers of poorly grounded edits.

To avoid this:

  • We commit after every successful AI-generated change
  • We reset liberally with git reset --hard
  • We treat Cursor like a junior engineer: high potential, but needs oversight

This habit keeps code clean and minimizes the risk of shipping something subtly broken.
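The commit-then-reset loop looks like this in practice. The sketch below runs in a throwaway repo; file names and commit messages are illustrative.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

# 1. Commit immediately after a successful AI-generated change.
echo "working rollout fix" > rollout.go
git add rollout.go
git commit -qm "AI-assisted: rollout status fix"

# 2. The next AI edit goes wrong; don't pile fixes on top of it.
echo "broken follow-up edit" >> rollout.go

# 3. Reset liberally back to the last known-good commit.
git reset --hard -q

git log --oneline
```

Because every good state is a commit, `git reset --hard` is cheap; the cost of discarding a bad AI edit is near zero.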

Refactoring With an AI Pair Programmer

Many AI users underutilize Cursor for refactoring. We use it proactively to reduce tech debt. Cursor is great at spotting long functions, repeated blocks, or inconsistent patterns.

Prompt:

Refactor `updateRolloutStatus` to:

  1. Reduce nesting
  2. Improve naming
  3. Move DB logic to storage layer
  4. Keep behavior identical (covered by existing tests)

This helps us modernize legacy code faster while maintaining confidence in test coverage.
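The most common transformation this prompt produces is replacing nested conditionals with guard clauses. A hypothetical before/after sketch (the function body and error messages are illustrative, not the real `updateRolloutStatus`):

```go
package main

import "fmt"

type Rollout struct {
	ID     string
	Status string
}

// Before (sketch): three levels of nesting.
//   if r != nil {
//       if r.Status != "complete" {
//           if next != "" { r.Status = next }
//       }
//   }

// After: guard clauses flatten the nesting; behavior is unchanged.
func updateRolloutStatus(r *Rollout, next string) error {
	if r == nil {
		return fmt.Errorf("nil rollout")
	}
	if r.Status == "complete" {
		return fmt.Errorf("rollout %s already complete", r.ID)
	}
	if next == "" {
		return fmt.Errorf("empty status")
	}
	r.Status = next
	return nil
}

func main() {
	r := &Rollout{ID: "r-1", Status: "active"}
	if err := updateRolloutStatus(r, "paused"); err == nil {
		fmt.Println("status:", r.Status)
	}
}
```

Because the existing tests pin the behavior, the refactor can be accepted or reverted purely on whether they still pass.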

Stack and Architecture Choices That Work Better With AI

We've learned that some frameworks and patterns are more AI-friendly than others. Cursor works best when the system is modular, typed, and documented.

We favor:

  • Go with layered clean architecture
  • TypeScript with React Hooks
  • Clearly defined service boundaries

Avoiding internal-only DSLs and untyped APIs increases the AI’s effectiveness significantly.

Documentation On Demand

AI doesn’t just write code; it writes docs, too. We use Cursor to generate internal documentation, API specs, and even architectural decision records (ADRs).

Prompt:

Generate API documentation for `POST /v1/rollouts`. Include:

  • Purpose
  • Request/response schema
  • Error codes
  • Authorization behavior

This reduces the manual overhead and keeps our internal docs up to date as features evolve.

Final Thoughts

These AI-assisted development patterns didn’t come from theory. They evolved from hundreds of hours spent building real automotive-grade systems. Our advice: treat the AI like a collaborator, not an oracle. Let it handle repetition and suggestion, but always layer human review, especially for safety-critical logic.

For connected vehicle developers looking to move faster without compromising reliability, adopting these patterns offers a real edge.

Xiaojian Huang
Xiaojian leads Sibros' software development and cloud infrastructure teams. Prior to Sibros, he was head of product at Nuro.ai, and before that he was a key engineer at Uber and Facebook. At Uber, he led the teams that built the cloud infrastructure running Uber's critical workloads for the app worldwide.