Galaxus reported freely accessible baby-camera recordings; The Verge reported roughly 1.1 million affected Meari devices. The important lesson is uncomfortable: a password is not enough if the platform behind it lets messages, images, or keys cross device boundaries. Timmy makes no claim of perfect security, but its pairing and media secrets are deliberately kept out of any cloud-camera platform.
What appears to have failed in the Meari case
The public reports describe a white-label platform: many consumer brands sold cameras that depended on the same Meari/CloudEdge infrastructure. That is why the incident matters: when a shared platform draws an authorization boundary incorrectly, the result is not one weak camera but a fleet-level exposure.
The reported pattern goes beyond weak default passwords. Sources describe MQTT messages without sufficient per-device subscription controls, publicly reachable image URLs, weak image obfuscation, and static or app-extractable keys. That is a platform failure: infrastructure could reveal data that should never have been available to a different account.
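To make the missing control concrete, here is a minimal sketch of the kind of per-device subscription check the reports describe as absent. It assumes an Aedes-style Node.js MQTT broker; the devices/<deviceId>/... topic scheme and the ownsDevice lookup are hypothetical, not Meari's actual layout.

```typescript
import Aedes from "aedes";

const broker = new Aedes();

// Hypothetical ownership lookup: does the account behind this MQTT
// client own the device whose topic it wants to subscribe to?
declare function ownsDevice(clientId: string, deviceId: string): boolean;

broker.authorizeSubscribe = (client, sub, callback) => {
  // Accept only concrete topics shaped like devices/<deviceId>/...;
  // the character class rejects MQTT wildcards (+ and #) outright.
  const match = /^devices\/([^/+#]+)\//.exec(sub.topic);
  if (match && ownsDevice(client.id, match[1])) {
    callback(null, sub); // allowed: the subscriber owns this device
  } else {
    callback(new Error("not authorized for this topic")); // deny all else
  }
};
```

Without a hook like this, every authenticated client can subscribe to other households' topics, which is exactly the fleet-level exposure the reports describe.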
| Cloud-camera risk | Timmy's counter-design |
|---|---|
| The backend stores or distributes image events. | Timmy has no cloud archive for nursery images; media is live WebRTC. |
| A broker or bucket must perfectly authorize every device. | Firestore carries only pairing and signaling data; SDP/ICE is encrypted before it is written. |
| Static keys can affect an entire fleet. | Each pairing creates its own P-256 ECDH-derived key on the devices. |
| A relay path may be mistaken for media access. | TURN forwards encrypted SRTP packets but does not receive media keys. |
How Timmy creates the secret
The four-character Timmy code is intentionally not the secret. In code, it is only a rendezvous point: the app derives a meetingKey from it so both devices can find the same Firestore document for the public-key exchange. Private ECDH keys never leave the devices.
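A minimal sketch of that rendezvous step, assuming a SHA-256 hash of the normalized code as the document id (the actual derivation in Timmy Core may differ):

```typescript
// Both devices turn the shared four-character code into the same
// Firestore document id. WebCrypto, so this runs in browsers and
// modern Node alike.
async function deriveMeetingKey(code: string): Promise<string> {
  const normalized = code.trim().toUpperCase();
  const digest = await crypto.subtle.digest(
    "SHA-256",
    new TextEncoder().encode(normalized)
  );
  // Hex-encode the digest and use it as the meeting-point path.
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```

Four characters carry almost no entropy, so the meetingKey can only locate the exchange; confidentiality comes from the ECDH keys, never from the code.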
Both devices then compute the same P-256 ECDH shared secret, and the pairing key is derived locally from it. The two-digit short authentication string (SAS) is derived from the shared secret plus both public keys in sorted order. If an attacker tampers with the key exchange, the two devices display different numbers, which tells users not to confirm the pairing.
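A sketch of both computations with WebCrypto; Timmy Core defines the exact byte layout, so the concatenation order and the final modulo step here are illustrative assumptions:

```typescript
// Lexicographic byte comparison so both devices sort the two public
// keys identically regardless of which role they play.
function compareBytes(a: Uint8Array, b: Uint8Array): number {
  for (let i = 0; i < Math.min(a.length, b.length); i++) {
    if (a[i] !== b[i]) return a[i] - b[i];
  }
  return a.length - b.length;
}

async function deriveSharedSecretAndSas(
  myPrivateKey: CryptoKey,
  myPublicRaw: Uint8Array,
  peerPublicRaw: Uint8Array
): Promise<{ sharedSecret: Uint8Array; sas: string }> {
  const peerKey = await crypto.subtle.importKey(
    "raw",
    peerPublicRaw,
    { name: "ECDH", namedCurve: "P-256" },
    false,
    []
  );
  // The 256-bit shared secret exists only on the two devices.
  const sharedSecret = new Uint8Array(
    await crypto.subtle.deriveBits(
      { name: "ECDH", public: peerKey },
      myPrivateKey,
      256
    )
  );
  // Hash the shared secret together with BOTH public keys so the SAS
  // is bound to the exact keys that were exchanged.
  const [lo, hi] = [myPublicRaw, peerPublicRaw].sort(compareBytes);
  const input = new Uint8Array(sharedSecret.length + lo.length + hi.length);
  input.set(sharedSecret, 0);
  input.set(lo, sharedSecret.length);
  input.set(hi, sharedSecret.length + lo.length);
  const digest = new Uint8Array(await crypto.subtle.digest("SHA-256", input));
  // Two decimal digits: enough for a human check, not a key.
  const sas = String(((digest[0] << 8) | digest[1]) % 100).padStart(2, "0");
  return { sharedSecret, sas };
}
```

A two-digit SAS gives an active man-in-the-middle roughly a one-in-a-hundred chance of showing matching numbers on a single attempt, which is what makes the human comparison meaningful.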
```mermaid
sequenceDiagram
    participant Baby as Baby device
    participant Firestore as Firestore meeting point
    participant Parent as Parent device
    participant Turn as TURN relay
    Baby->>Baby: Generate P-256 ECDH keypair
    Parent->>Parent: Generate P-256 ECDH keypair
    Baby->>Firestore: Write public key only under meetingKey
    Parent->>Firestore: Write public key only under meetingKey
    Firestore-->>Baby: Parent public key
    Firestore-->>Parent: Baby public key
    Baby->>Baby: Compute sharedSecret + SAS
    Parent->>Parent: Compute sharedSecret + SAS
    Baby-->>Parent: Humans compare SAS on both screens
    Baby->>Firestore: Write SDP/ICE encrypted with AES-256-GCM
    Parent->>Firestore: Write SDP/ICE encrypted with AES-256-GCM
    Baby-)Turn: WebRTC media as DTLS/SRTP packets
    Turn-)Parent: Relay forwards encrypted packets
    Note over Turn: TURN sees network metadata, not media keys
```
Simplified Timmy security chain: Firestore is rendezvous and signaling transport; TURN is only a relay; media stays WebRTC-encrypted.
Why WebRTC media cannot be silently watched
WebRTC is not just "send video." Before any media flows, the devices perform a DTLS handshake, and the SRTP keys for audio and video are exported from that handshake (DTLS-SRTP). The media packets themselves are then encrypted as SRTP. A TURN server can forward those packets, but it never receives the keys needed to decrypt the audio or video.
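A minimal sketch of what a client configures; the TURN URL and credentials are placeholders, and the per-connection DTLS certificate is generated by the WebRTC stack itself:

```typescript
// Even when every packet is forced through a TURN relay, the relay
// only receives credentials that authorize relaying, never media keys.
const pc = new RTCPeerConnection({
  iceServers: [
    {
      urls: "turn:turn.example.com:3478", // hypothetical relay address
      username: "timmy-session",          // authorizes relaying only
      credential: "ephemeral-password",   // placeholder credential
    },
  ],
  iceTransportPolicy: "relay", // force media through the TURN server
});
// The SRTP keys are derived from the DTLS handshake between the two
// endpoints; the relay sees and forwards opaque ciphertext.
```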
Timmy adds a layer before that: signaling data such as SDP offers, SDP answers, and ICE candidates is encrypted with AES-256-GCM before it reaches Firestore. Firestore helps the devices negotiate; it is never meant to hold cleartext video, audio, or signaling.
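A sketch of that signaling encryption with WebCrypto; the key is assumed to be a 256-bit AES-GCM key derived from the pairing, and the { iv, ciphertext } document shape is illustrative:

```typescript
// Encrypt an SDP payload before writing it to the signaling document.
async function encryptSignaling(
  pairingKey: CryptoKey, // AES-GCM, 256-bit, derived from the pairing
  sdp: string
): Promise<{ iv: Uint8Array; ciphertext: Uint8Array }> {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per message
  const ciphertext = new Uint8Array(
    await crypto.subtle.encrypt(
      { name: "AES-GCM", iv },
      pairingKey,
      new TextEncoder().encode(sdp)
    )
  );
  // Firestore stores only { iv, ciphertext }; without the pairing key
  // the document reveals nothing about the offer or the candidates.
  return { iv, ciphertext };
}
```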
What Timmy still does not claim
No serious baby monitor should claim to be unhackable. If a phone is compromised, any app can be attacked. A malicious app build changes the risk model. Server configuration must remain correct. Timmy's narrower claim is architectural: it avoids backend-readable nursery media artifacts and makes the security-critical pairing logic inspectable in the public core project.
Questions to ask about any baby camera
- Does the vendor store images or clips?
- Are media URLs private, short-lived, and authorized per device?
- Are keys generated per device or per pairing, rather than shipped statically inside the app?
- Does the broker deliver messages only for devices your account actually owns?
- Does pairing let a human notice a man-in-the-middle attempt?
Read the code
- ECDH and SAS in Timmy Core
- Meeting key, document key, and AES-GCM
- Firestore rules for sessions and pairing
- Security documentation in the core project
Sources
- Recordings from baby cameras freely accessible · Galaxus
- A million baby monitors and security cameras were easily viewable by hackers · The Verge
- nobody puts baby in a corner · Sammy Azdoufal
- Parent guide for the same incident · Baby Monitor Timmy