Why?
Podping is an existing mechanism that allows podcast feed publishers to notify podcast apps that a feed has changed and should be re-polled. In December 2025, over 2.1 million podping url events were sent by podcast publishers large and small, including podcast hosting companies like Spreaker, Buzzsprout, RSS.com, Captivate, and Transistor. Event publishing is made as simple as possible thanks to Dave Jones hosting podping.cloud – five public servers that publishers can call with a single GET request whenever one of their feeds changes. 99.87% of all podping url events were sent this way in December 2025.
Under the hood, though, these events are stored on a little-known blockchain called Hive, a fork of a little-known blockchain called Steem (and not to be confused with the publicly-traded company HIVE Digital of the same name). Publishers need "hive power" credits to publish, and podcast apps either need to listen to the hive API directly or leverage someone else's proxy, like my livewire.io websocket service, in order to do so. Podping is thus (currently) tied to the fate of this blockchain fork. We need to get another backend up and running for podping ASAP, ready for the day when hive is no longer viable.
What?
It occurred to me that we could use atproto (Authenticated Transfer Protocol) as an alternative backend, creating a more isolated, approachable and distributed podping platform, using three off-the-shelf open-source building blocks from the atproto stack: self-hosted PDSes, Relays, and Jetstream.
In a nutshell:
- Podpings become strongly-typed public data records stored in dedicated PDSes, either self-hosted by the publisher (ideal, just like publishers host feeds themselves) or via a shared PDS host.
- Podping-specific Relays aggregate real-time events coming from multiple known podping PDSes into a single stream, and can be hosted by anyone.
- Podcast apps can either subscribe to a Relay (CBOR over websockets), or a corresponding Jetstream instance (JSON over websockets) for real-time events and history/catch-up. They can even run their own.
Some things we get for free with an atproto-based approach:
- No single point of failure.
- Freely-available third-party tools and SDKs, plus real-world open-source production services.
- Unforgeable records (cryptographic verifiability).
- Account migration: a publisher account is a keypair, and can move to another server without losing data.
- Public data: all atproto data is publicly readable.
- Simple operational architecture (http/websockets/sql).
- Podpings that are much closer to real-time (almost instant) than the current system, which is delayed by at least 30-40 seconds.
Bootstrap
To see if this concept made sense and get the ball rolling, I went through the process of self-hosting a vanilla PDS and a vanilla Relay, each on a $5/mo VPS. In fact, I am hosting a pair of them: one PDS and Relay dedicated to fully-mirroring every existing hive-based podping (current state), and one PDS and Relay enhanced with atproto-native podpings (future state).
Current state (pds1/relay1)
pds1.podping.at is a self-hosted PDS with one mirror atproto account repo for every existing hive account that has ever published a podping, backfilled with normalized records for all historical podpings, and continues to mirror them in real-time.
Each hive account (e.g. podping.aaa) has a corresponding atproto account repo and handle (e.g. h-podping-aaa.podping.at). Each repo contains strongly-typed collections of at.podping.records.podping and at.podping.records.startup records.
Think of each atproto account repo as a sqlite database (because it is, under the hood), each collection as a table, and each record as a row in that table. Both types are defined with standard atproto lexicon schemas that can be looked up by their type ids:
- The at.podping.records.podping record type includes all properties in the current podping schema version 1.1. Older versions are normalized for simplicity, but carry their original version property.
- The at.podping.records.startup record type includes all properties found in the existing startup messages.
- Every mirrored record also carries a source property, with a pointer back to the original hive block, transaction, and operation index.
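To make the record shape concrete, here is a minimal sketch of building a podping record dict in Python. The $type, version, and source properties come from the description above; the remaining field names (medium, reason, iris) are assumptions based on the podping 1.1 schema, not confirmed details of the at.podping.records.podping lexicon.

```python
def make_podping_record(iris, reason="update", medium="podcast"):
    """Build a dict shaped like an at.podping.records.podping record.

    Field names beyond $type and version are assumed from podping 1.1.
    """
    return {
        "$type": "at.podping.records.podping",  # atproto records carry their lexicon id
        "version": "1.1",                       # podping schema version
        "medium": medium,                       # e.g. podcast, music, video
        "reason": reason,                       # e.g. update, live, liveEnd
        "iris": list(iris),                     # the feed urls/iris that changed
    }

record = make_podping_record(["https://example.com/feed.xml"])
```

A mirrored record on pds1 would additionally carry the source pointer back to the originating hive block.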
Since pds1.podping.at is a standard PDS, anyone can also subscribe to the "firehose" api (websocket-based) for real-time notification of new records created there.
And since pds1 hosts mirror accounts for every hive account, this effectively becomes a real-time view of hive-based podpings.
relay1.podping.at is a self-hosted Relay that subscribes to all hive mirror account repos on pds1. In general, a Relay aggregates the firehose API from one or more underlying PDSes and provides consumers with a single firehose.
The PDS and Relay firehoses send CBOR as the websocket payload: an efficient binary serialization format for transport and storage, but not so easy for new developers to handle.
To make things easier, I'm also hosting a corresponding Jetstream instance jet1.podping.at on the same VPS as the Relay.
Jetstream is another standard atproto service that translates a Relay's firehose into a more approachable JSON-based one: wss://jet1.podping.at/subscribe.
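As a sketch of what consuming that JSON stream looks like, here is a small stdlib-only parser for one Jetstream message. The event shape (kind, commit, collection, record) follows Jetstream's documented commit events; pair this function with any websocket client of your choice.

```python
import json

def extract_podping(raw: str):
    """Parse one Jetstream message; return the podping record, or None.

    Jetstream commit events look roughly like:
    {"did": "...", "time_us": 123, "kind": "commit",
     "commit": {"operation": "create", "collection": "...", "record": {...}}}
    """
    event = json.loads(raw)
    if event.get("kind") != "commit":
        return None  # ignore identity/account events
    commit = event.get("commit") or {}
    if commit.get("collection") != "at.podping.records.podping":
        return None  # ignore other record types
    return commit.get("record")

# Wire this to a websocket client subscribed to, e.g.:
#   wss://jet1.podping.at/subscribe?wantedCollections=at.podping.records.podping
```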
Because pds1 and relay1/jet1 are simply mirroring data coming out of the hive blockchain, they are subject to the same delays, and when hive goes away, the data stops flowing.
Future state (pds2/relay2)
pds2.podping.at is a separate self-hosted PDS that I'll use to help host anyone with a hive account that also wants to publish directly to atproto, ensuring podping's future in a world without hive.
Once you've decided whether to self-host your own PDS or use mine, the process is pretty simple: call the com.atproto.repo.createRecord atproto xrpc api method (a single http POST call with a JSON payload) with a valid at.podping.records.podping record whenever one of your podcast feeds updates, i.e. at the same time you currently make a call to write to hive.
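A minimal stdlib-only sketch of that flow, assuming password-based session auth on the PDS (com.atproto.server.createSession) and the record fields sketched earlier (medium, reason, iris are assumptions drawn from podping 1.1, not a confirmed lexicon):

```python
import json
import urllib.request

PDS = "https://pds2.podping.at"  # or your own self-hosted PDS

def build_create_record(did, iris, reason="update", medium="podcast"):
    """Build the JSON body for com.atproto.repo.createRecord."""
    return {
        "repo": did,  # the publisher account's DID
        "collection": "at.podping.records.podping",
        "record": {
            "$type": "at.podping.records.podping",
            "version": "1.1",
            "medium": medium,
            "reason": reason,
            "iris": list(iris),  # assumed field name, per podping 1.1
        },
    }

def publish_podping(handle, password, iris):
    """Log in to the PDS and create one podping record (two HTTP POSTs)."""
    def post(method, body, token=None):
        headers = {"Content-Type": "application/json"}
        if token:
            headers["Authorization"] = f"Bearer {token}"
        req = urllib.request.Request(f"{PDS}/xrpc/{method}",
                                     data=json.dumps(body).encode(),
                                     headers=headers)
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    session = post("com.atproto.server.createSession",
                   {"identifier": handle, "password": password})
    return post("com.atproto.repo.createRecord",
                build_create_record(session["did"], iris),
                token=session["accessJwt"])
```

An SDK will wrap the session handling for you; the point is that the whole publish path is ordinary HTTP and JSON.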
If you have your own hive account today, get in touch and I'll walk you through your implementation. The nice thing about using vanilla atproto building blocks is that there are SDKs in almost every programming language if you don't want to construct the http calls yourself.
relay2.podping.at is a separate self-hosted Relay that will subscribe to any native atproto podpings once they exist, falling back to the existing hive mirror repos when they don't.
Just like with relay1, I'm also hosting a corresponding Jetstream instance jet2.podping.at on the same VPS as relay2.
This is the relay that will realize the benefit of atproto's much quicker notification latency (almost instant) once there are native atproto publishers, and probably the one apps will want to use.
Publishers (Podcast hosting companies)
Consider hosting your own PDS and publishing native atproto podpings. This way you can own your podpings just like you own your feeds. I'll post my steps of how I installed my PDS server on a $5 Ubuntu VPS, which I based on the standard self-hosting guide here.
Alternatively, you can use mine (see pds2 above), get in touch and I'll help you get started. Either way, publishing native atproto podpings will make your podpings available more quickly to apps (almost instantly).
(pds installation docs to come)
Consumers (Podcast apps)
Use relay2 or jet2 to listen for podpings, especially if you are using my livewire.io websocket service now. The new stream has a more robust schema, you can pick up where you left off with a cursor query param, and you will see instant notifications once publishers start publishing to atproto themselves. No authentication required.
cbor: websocat --base64 wss://relay2.podping.at/xrpc/com.atproto.sync.subscribeRepos
json: websocat wss://jet2.podping.at/subscribe
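For catch-up, here is a small sketch of building the Jetstream subscribe url with the cursor and collection-filter query params. The cursor value is assumed to be a time_us from a previously-seen event, per Jetstream's documented cursor behavior.

```python
from urllib.parse import urlencode

def jetstream_url(base="wss://jet2.podping.at/subscribe",
                  cursor_us=None,
                  collections=("at.podping.records.podping",)):
    """Build a Jetstream subscribe url.

    cursor_us: a time_us value from a previously-seen event; passing it
    back resumes the stream where you left off.
    """
    params = [("wantedCollections", c) for c in collections]
    if cursor_us is not None:
        params.append(("cursor", str(cursor_us)))
    return f"{base}?{urlencode(params)}"
```

Store the time_us of the last event you processed, and pass it back on reconnect.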
You can run your own relay and not rely on mine (see app-hosted relay in the diagram above).
(vanilla relay installation docs to come)
Cool
No harm in running the atproto backend and the hive backend in parallel for the foreseeable future. If the atproto backend proves useful, we've insured ourselves for the day hive goes away.