Robotics From Zero
Module: Making Parts Talk

Message Types

How robot data is structured into typed messages — common message types, serialization, and why strong typing prevents bugs.

10 min read

When a camera node publishes an image, what exactly does it send? A blob of bytes? A JSON object? A Protocol Buffer? The answer matters more than you might think.

Why Typed Messages?

Imagine two teams building a robot. Team A writes the camera driver. Team B writes the object detector. Without an agreed-upon message format:

  • Team A sends pixels as RGB, Team B expects BGR → red and blue channels are swapped
  • Team A sends width then height, Team B reads height then width → image is distorted
  • Team A adds a timestamp field, Team B doesn't expect it → parsing crashes

Typed messages solve this by creating an explicit contract between sender and receiver.
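The contract idea can be made concrete in a few lines of code. Here is a minimal Python sketch (the ImageMsg class and its fields are illustrative, not any particular framework's API): because the message declares its encoding and dimensions, a malformed payload fails loudly at construction instead of silently producing swapped channels or distorted images downstream.

```python
from dataclasses import dataclass

@dataclass
class ImageMsg:
    width: int      # pixels, columns
    height: int     # pixels, rows
    encoding: str   # "rgb8" or "bgr8" -- stated explicitly, never assumed
    data: bytes     # raw pixel bytes

    def __post_init__(self):
        # The declared shape lets us reject malformed payloads immediately.
        expected = self.width * self.height * 3
        if len(self.data) != expected:
            raise ValueError(f"expected {expected} bytes, got {len(self.data)}")

# A well-formed 4x2 RGB image: 4 * 2 * 3 = 24 bytes.
msg = ImageMsg(width=4, height=2, encoding="rgb8", data=bytes(24))
```

Team B no longer has to guess: the encoding field answers the RGB-versus-BGR question, and the size check catches a width/height mix-up the moment the message is built.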

Common Robot Message Types

Here are the most frequently used message types across robot systems:

Message structure anatomy — exploded view of a Twist message with header and payload
Every message has two parts: a header (when and where) and a payload (the actual data). Here's a Twist message that commands a robot to drive forward at 0.5 m/s while turning.

Geometry Messages

Geometry primitives
class Vector3:
    x: float    # meters
    y: float
    z: float
 
class Quaternion:
    x: float
    y: float
    z: float
    w: float
 
class Pose:
    position: Vector3       # where (x, y, z)
    orientation: Quaternion  # which way it's facing
 
class Twist:
    linear: Vector3     # linear velocity (m/s)
    angular: Vector3    # angular velocity (rad/s)
 
class Transform:
    translation: Vector3
    rotation: Quaternion
Geometry primitives — visual catalog of Point, Vector3, Quaternion, Pose, and Twist
Five geometry types compose 95% of robot messages. Point is a location, Vector3 is a direction, Quaternion is a rotation, Pose combines position + orientation, and Twist combines linear + angular velocity.
Coordinate convention — X forward, Y left, Z up with robot reference frame
The standard robot coordinate convention: X points forward (red), Y points left (green), Z points up (blue). When a message says linear.x = 0.5, it means 'drive forward at 0.5 m/s'.
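Putting the coordinate convention together with the Twist type above, here is the command from the earlier figure as a minimal Python sketch: drive forward at 0.5 m/s while turning. The dataclasses are illustrative stand-ins for whatever types your framework provides.

```python
from dataclasses import dataclass, field

@dataclass
class Vector3:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

@dataclass
class Twist:
    linear: Vector3 = field(default_factory=Vector3)   # m/s
    angular: Vector3 = field(default_factory=Vector3)  # rad/s

# X points forward, Z points up, so:
cmd = Twist()
cmd.linear.x = 0.5    # drive forward at 0.5 m/s
cmd.angular.z = 0.2   # rotate about the up axis (turn left) at 0.2 rad/s
```

Note that all the unused fields default to zero: an empty Twist is a valid "stop" command.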

Sensor Messages

Sensor data types
class Image:
    header: Header          # timestamp + frame_id
    width: int              # pixels
    height: int             # pixels
    encoding: str           # "rgb8", "bgr8", "mono8", etc.
    data: bytes             # raw pixel data
 
class LaserScan:
    header: Header
    angle_min: float        # start angle (radians)
    angle_max: float        # end angle
    angle_increment: float  # angular resolution
    ranges: list[float]     # distance measurements
    intensities: list[float]  # signal strength
 
class Imu:
    header: Header
    orientation: Quaternion
    angular_velocity: Vector3
    linear_acceleration: Vector3
 
class PointCloud2:
    header: Header
    width: int
    height: int
    fields: list[PointField]  # x, y, z, intensity, etc.
    data: bytes               # packed point data
Image message — how width, height, encoding, and data fields map to actual pixel data
An Image message carries 640×480×3 = 921,600 bytes of pixel data per frame. Each pixel is 3 bytes (red, green, blue) with values from 0 to 255.
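That layout is easy to verify in a few lines of Python (a sketch; the pixel helper is ours, not a library function): for "rgb8" encoding the data is row-major, so the pixel at (row, col) starts at byte offset (row * width + col) * 3.

```python
width, height = 640, 480
data = bytes(width * height * 3)  # stand-in for one captured frame (all zeros)

def pixel(data: bytes, width: int, row: int, col: int) -> tuple[int, int, int]:
    # Row-major layout, 3 bytes per pixel for "rgb8".
    i = (row * width + col) * 3
    return data[i], data[i + 1], data[i + 2]

assert len(data) == 921_600   # matches the figure: 640 * 480 * 3
r, g, b = pixel(data, width, 120, 300)
```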

The Header

Almost every message includes a Header:

Message header
class Header:
    timestamp: Time     # when the data was captured
    frame_id: str       # coordinate frame ("base_link", "camera_optical")

The timestamp tells you when the data was captured (not when it was received — those can differ by milliseconds). The frame_id tells you what coordinate frame the data is in. We'll explore frames in detail in Module 3.

Tip

Always check the timestamp! If you're fusing data from multiple sensors, you need to synchronize them by timestamp. A LiDAR scan from 50ms ago and a camera image from now describe slightly different scenes if the robot is moving.
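A minimal version of that synchronization might look like this (a Python sketch; real frameworks provide approximate-time synchronizers, but the idea is the same): pick the scan whose timestamp is closest to the image's, and refuse pairs that are too far apart to fuse safely.

```python
def closest_by_stamp(scans, stamp, tolerance=0.05):
    """Return the (timestamp, payload) pair closest in time to `stamp`,
    or None if the best match is more than `tolerance` seconds away.
    `scans` is a non-empty list of (timestamp_seconds, payload) pairs."""
    best = min(scans, key=lambda s: abs(s[0] - stamp))
    if abs(best[0] - stamp) > tolerance:
        return None   # too stale to fuse with this image
    return best

scans = [(0.00, "scan_a"), (0.10, "scan_b"), (0.20, "scan_c")]
closest_by_stamp(scans, 0.12)  # -> (0.10, "scan_b"), only 20 ms apart
```

The tolerance is the tuning knob: a robot moving at 1 m/s travels 5 cm in 50 ms, so how stale is "too stale" depends on how fast you move and how precise you need to be.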

Serialization

Messages need to be converted to bytes for transmission and back again. This process is called serialization (to bytes) and deserialization (from bytes).

Common serialization formats:

Format            Speed      Size          Human-Readable?  Used In
CDR               Very fast  Compact       No               DDS middleware
Protocol Buffers  Fast       Very compact  No               gRPC, Google
FlatBuffers       Zero-copy  Compact       No               Game engines
JSON              Slow       Large         Yes              Web APIs, debugging
MessagePack       Fast       Compact       No               Various

For robotics, speed and size matter. A 640×480 RGB image is about 900KB. At 30fps, that's 27MB/s from just one camera. You don't want to add overhead with verbose formats like JSON.
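The size gap is easy to demonstrate with the standard library (a Python sketch; CDR and Protocol Buffers encode differently in detail, but like struct they are compact binary formats):

```python
import json
import struct

# One 3-vector, serialized two ways.
as_json = json.dumps({"x": 0.5, "y": 0.0, "z": 0.0}).encode()
as_packed = struct.pack("<3f", 0.5, 0.0, 0.0)  # three little-endian 32-bit floats

len(as_packed)  # 12 bytes
len(as_json)    # 30 bytes -- 2.5x larger, for a tiny message
```

At image scale the gap widens further: JSON has no native bytes type, so pixel data would have to be base64-encoded, inflating it by roughly another third.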

Note

Zero-copy serialization means the receiver reads data directly from the sender's memory without creating a copy. This is critical for large messages like images and point clouds. Some high-performance robotics frameworks support zero-copy via shared memory.
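Python's memoryview shows the idea at small scale (a sketch; shared-memory transports apply the same principle across process boundaries): a view slices a buffer without copying it, so the reader sees the writer's bytes directly.

```python
frame = bytearray(640 * 480 * 3)   # one RGB frame owned by the "sender"
view = memoryview(frame)
row0 = view[: 640 * 3]             # first pixel row -- a view, not a copy

frame[0] = 255                     # sender writes a byte...
row0[0]                            # ...and the reader's view sees 255
```

Compare with `bytes(frame)[: 640 * 3]`, which would copy all 900 KB just to read one row.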

Defining Your Own Messages

Sometimes standard messages aren't enough. You need to define custom types for your specific application:

Custom message definition
# A detected ball with position, color, and confidence
class DetectedBall:
    header: Header
    x: float            # pixel coordinate
    y: float            # pixel coordinate
    radius: float       # apparent size in pixels
    color: str          # "red", "blue", "green"
    confidence: float   # 0.0 to 1.0
    found: bool         # whether a ball was detected at all

Good practices for custom messages:

  • Include a Header — timestamps and frames are always useful
  • Use standard units — meters, radians, seconds (not inches, degrees, milliseconds)
  • Keep it minimal — include only data that subscribers actually need
  • Document fields — add comments explaining units and valid ranges
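The "standard units" rule is easiest to enforce at the boundary where data enters the system. A Python sketch (the function name and values are illustrative): convert once, in the driver, so every subscriber can assume radians and meters.

```python
import math

def to_standard(angle_deg: float, range_inches: float) -> tuple[float, float]:
    # Normalize at the boundary: publish radians and meters, never
    # degrees or inches, so all subscribers share one set of assumptions.
    angle_rad = math.radians(angle_deg)
    range_m = range_inches * 0.0254  # exact inch-to-meter factor
    return angle_rad, range_m

angle, dist = to_standard(90.0, 39.3701)  # ~1.571 rad, ~1.0 m
```

One conversion in one place beats a dozen subscribers each guessing (or forgetting) which unit the driver used.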

What's Next?

We've covered what messages are and how they're structured. But there's one more critical aspect of communication: speed. In the next lesson, we'll explore latency and real-time constraints — why some robot systems need to respond in microseconds, and what "real-time" actually means.

Got questions? Join the community

Discuss this lesson, get help, and connect with other learners on r/softwarerobotics.

