At Tesla, we believe that security and privacy are core tenets of any modern technology. Customers should be able to decide what data they share with third parties, how they share it, and when it can be shared. We’ve developed a decentralized framework, “Fleet Telemetry,” that allows customers to create a secure, direct bridge from their Tesla devices to any provider they authorize. Fleet Telemetry is a simple, scalable, and secure data exchange service for vehicles and other devices.
Fleet Telemetry is a server reference implementation. The service handles device connectivity as well as receiving and storing transmitted data. Once configured, devices establish a websocket connection to push configurable telemetry records. Fleet Telemetry provides clients with ack, error, or rate-limit responses.
Configuring and running the service
As a service provider, you will need to register a publicly available endpoint to receive device connections. Tesla devices rely on a mutual TLS (mTLS) websocket to create a connection with the backend. The application has been designed to operate on top of Kubernetes, but you can run it as a standalone binary if you prefer.
Install on Kubernetes with Helm chart (recommended)
Please follow these instructions
Manual install (Skip this if you have installed with Helm on Kubernetes)
- Allocate and assign a FQDN; this will be used in the server and client (vehicle) configuration.
- Design a simple hosting architecture. We recommend: Firewall/Loadbalancer -> Fleet Telemetry -> Kafka.
- Ensure mTLS connections are terminated on the Fleet Telemetry service.
- Configure the server:
{
  "host": string - hostname,
  "port": int - port,
  "log_level": string - trace, debug, info, warn, error,
  "json_log_enable": bool,
  "namespace": string - kafka topic prefix,
  "reliable_ack": bool - for use with reliable datastores, recommend setting to true with kafka,
  "monitoring": {
    "prometheus_metrics_port": int,
    "profiler_port": int,
    "profiling_path": string - out path,
    "statsd": { if you are not using prometheus
      "host": string - host:port of the statsd server,
      "prefix": string - prefix for statsd metrics,
      "sample_rate": int - 0 to 100 percentage to sample stats,
      "flush_period": int - ms flush period
    }
  },
  "kafka": { // librdkafka kafka config, see: https://raw.githubusercontent.com/confluentinc/librdkafka/master/CONFIGURATION.md
    "bootstrap.servers": "kafka:9092",
    "queue.buffering.max.messages": 1000000
  },
  "kinesis": {
    "max_retries": 3,
    "streams": {
      "V": "custom_stream_name"
    }
  },
  "rate_limit": {
    "enabled": bool,
    "message_limit": int - ex.: 1000
  },
  "records": { list of records and their dispatchers, currently: alerts, errors, and V (vehicle data)
    "alerts": [
      "logger"
    ],
    "errors": [
      "logger"
    ],
    "V": [
      "kinesis",
      "kafka"
    ]
  },
  "tls": {
    "server_cert": string - server cert location,
    "server_key": string - server key location
  }
}
Example: server_config.json
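As a sketch only, a minimal configuration following the schema above might look like this; all values (host, port, paths, namespace) are illustrative placeholders, not recommended settings:

{
  "host": "0.0.0.0",
  "port": 443,
  "log_level": "info",
  "json_log_enable": true,
  "namespace": "tesla_telemetry",
  "reliable_ack": true,
  "monitoring": {
    "prometheus_metrics_port": 9090
  },
  "kafka": {
    "bootstrap.servers": "kafka:9092",
    "queue.buffering.max.messages": 1000000
  },
  "rate_limit": {
    "enabled": true,
    "message_limit": 1000
  },
  "records": {
    "alerts": ["logger"],
    "errors": ["logger"],
    "V": ["kafka"]
  },
  "tls": {
    "server_cert": "/etc/certs/server/tls.crt",
    "server_key": "/etc/certs/server/tls.key"
  }
}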
- Deploy and run the server. Get the latest docker image information from Docker Hub. The service can be run directly on a server as a binary via
./fleet-telemetry -config=/etc/fleet-telemetry/config.json
or as a Kubernetes deployment. Example snippet:
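The following is a minimal Deployment sketch; the image tag, ConfigMap name, and mount path are illustrative assumptions, not the project's published manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: fleet-telemetry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fleet-telemetry
  template:
    metadata:
      labels:
        app: fleet-telemetry
    spec:
      containers:
        - name: fleet-telemetry
          image: tesla/fleet-telemetry:latest  # assumed tag; check Docker Hub for current images
          args: ["-config=/etc/fleet-telemetry/config.json"]
          ports:
            - containerPort: 443
          volumeMounts:
            - name: config
              mountPath: /etc/fleet-telemetry
      volumes:
        - name: config
          configMap:
            name: fleet-telemetry-config  # assumed ConfigMap holding config.json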
- Create and share a vehicle configuration with Tesla:
{
  "hostname": string - server hostname,
  "ca": string - pem format ca certificate(s),
  "fields": { map of field configurations
    name (string) -> {
      "interval_seconds": int - data polling interval in seconds
    }...
  },
  "alert_types": [ string list - alert audiences that should be pushed to the server; the recommendation is to use only "service" ]
}
Example: client_config.json
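As an illustration only, a client configuration matching the schema above might look like the following; the hostname, CA, field names, and intervals are placeholders (consult the protos for the supported field set):

{
  "hostname": "telemetry.example.com",
  "ca": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n",
  "fields": {
    "VehicleSpeed": { "interval_seconds": 10 },
    "Location": { "interval_seconds": 60 }
  },
  "alert_types": [ "service" ]
}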
Backends/dispatchers
The following dispatchers are supported:
- Kafka (preferred): Configure with the config.json file. See implementation here: config/config.go
- Kinesis: Configure with standard AWS env variables and config files. The default AWS credentials and config files are ~/.aws/credentials and ~/.aws/config.
  - By default, stream names will be *configured namespace*_*topic_name*, ex.: tesla_V, tesla_errors, tesla_alerts, etc.
  - Configure stream names directly by setting the streams config: "kinesis": { "streams": { *topic_name*: stream_name } }
  - Override stream names with env variables: KINESIS_STREAM_*uppercase topic*, ex.: KINESIS_STREAM_V
- Google pubsub: Along with the required pubsub config (see ./test/integration/config.json for an example), be sure to set the environment variable GOOGLE_APPLICATION_CREDENTIALS.
- Logger: This is a simple STDOUT logger that serializes the protos to json.
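To sanity-check a deployment end to end, it can help to tail the vehicle-data topic. Below is a minimal consumer sketch using confluent-kafka-go (the same client library the service builds against); the broker address, group id, and the tesla_V topic (which assumes a configured namespace of tesla) are illustrative:

package main

import (
	"fmt"
	"time"

	"github.com/confluentinc/confluent-kafka-go/v2/kafka"
)

func main() {
	// Consumer settings are illustrative; point bootstrap.servers at your cluster.
	c, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers": "kafka:9092",
		"group.id":          "fleet-telemetry-smoketest",
		"auto.offset.reset": "earliest",
	})
	if err != nil {
		panic(err)
	}
	defer c.Close()

	// "tesla_V" assumes the namespace is "tesla"; topics are <namespace>_<record type>.
	if err := c.SubscribeTopics([]string{"tesla_V"}, nil); err != nil {
		panic(err)
	}

	for {
		msg, err := c.ReadMessage(10 * time.Second)
		if err != nil {
			continue // timeouts are expected while the topic is idle
		}
		// Payloads are protobuf-encoded; decode them with the messages in the protos package.
		fmt.Printf("key=%s bytes=%d\n", string(msg.Key), len(msg.Value))
	}
}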
Metrics
The service requires Prometheus or a data store supporting the statsd interface for metrics; you should always monitor your applications.
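For example, if prometheus_metrics_port is set to 9090, a Prometheus scrape job might look like this (the job name and target are illustrative):

scrape_configs:
  - job_name: fleet-telemetry
    static_configs:
      - targets: ["fleet-telemetry:9090"]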
Protos
Data is encapsulated into protobuf messages of different types. We do not recommend making changes, but if you need to recompile the protos, you can do so with:
- Install protoc, currently version 3.21.12: https://grpc.io/docs/protoc-installation/
- Install protoc-gen-go:
go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.28
- Run the make command
Unit Tests
To run the unit tests: make test
Common Errors:
~/fleet-telemetry➜ git:(main) ✗ make test
go build github.com/confluentinc/confluent-kafka-go/v2/kafka:
# pkg-config --cflags -- rdkafka
Package rdkafka was not found in the pkg-config search path.
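This indicates that librdkafka, the native library confluent-kafka-go links against, is missing or not visible to pkg-config. Installing it usually resolves the build failure, for example:

# macOS (Homebrew)
brew install librdkafka pkg-config

# Debian/Ubuntu
sudo apt-get install -y librdkafka-dev pkg-config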