Technical Specification

Integration Method

Consumers of Catalogue Events must expose a public HTTPS endpoint that accepts JSON via POST requests. The endpoint must be secured with TLS, using a certificate signed by a trusted Certificate Authority. This integration method applies to both the Standard Metadata and Enhanced Metadata (EM) event streams; each metadata type requires its own endpoint, so configure one endpoint for Standard Metadata and a separate endpoint for Enhanced Metadata. Each request must also be verified using JWTs (see Authentication). Requests are authenticated in the same way for both event types, using the same key and the same JWT payload.

Note: Expose one endpoint per metadata type. Running separate staging and production endpoints for the same metadata type at the same time is not supported.

Acknowledgement

Your endpoint should respond with a 2XX status within 10 seconds of successfully receiving a request. Any other status will be interpreted as a failure, and the request will be retried 6 times over 12 hours. If the request is still not acknowledged after all retries, its events will be stored in an archive queue for manual reprocessing at a later date. Multiple requests may be sent simultaneously, so your service must be able to handle concurrent calls.
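One way to meet the 10-second acknowledgement window under concurrent load is to enqueue each request, acknowledge immediately, and process the events asynchronously. The sketch below illustrates that pattern; the helper names and the in-memory queue are illustrative assumptions, not part of the Catalogue Events contract, and a production service would use a durable queue.

```javascript
import { setTimeout as delay } from 'node:timers/promises';

// Illustrative in-memory queue; use a durable queue in production so
// events survive a crash between acknowledgement and processing.
const pending = [];

// Acknowledge an incoming request: hand the body off for asynchronous
// processing and return a 2XX immediately, well inside the 10-second
// window. Any non-2XX status triggers the sender's retry schedule
// (6 retries over 12 hours).
function acknowledge(rawBody) {
  pending.push(rawBody);
  processQueue(); // fire-and-forget; do not block the response on this
  return 202;     // any 2XX counts as successful receipt
}

async function processQueue() {
  while (pending.length > 0) {
    const body = pending.shift();
    // Decompress the body, verify the JWT and apply the events here.
    await delay(0); // yield so concurrent requests are not starved
  }
}
```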

Authentication

Authentication is implemented via JSON Web Tokens (JWTs) using asymmetric public/private key pairs. Requests from the Catalogue Events service include a signed token in the Authorization header.

  • Verification: Recipients must verify the token using the public key provided below.
  • Algorithm: RS256 (Asymmetric)
  • Expiration: Tokens carry a 5-minute TTL.
  • Consistency: The same key and JWT payload structure are used for both Standard and Enhanced Metadata requests.

Your application receiving Catalogue Events only needs to verify the token from the Authorization header using the provided public key. The additional example below is included for local development and testing.

Local testing and JWT flow example

The script below performs the following steps:

  1. Generates an RSA key pair, for testing purposes only. In production, only the public key is shared, and it is static. The equivalent shell commands are:
    ssh-keygen -t rsa -b 2048 -m PEM -f my-test-private.key
    openssl rsa -in my-test-private.key -pubout -outform PEM -out my-test-public-key.pub
  2. Signs a representative payload using the test private key (created in step 1), producing a JWT.
  3. Constructs a representative Authorization header for a request. In production, this step is performed by the Catalogue Events service when making requests to your endpoint.
  4. Verifies the JWT from the Authorization header using the test public key (created in step 1).

Why this is helpful:

  • For local development and testing of your application.
  • In your test or staging environment(s) you should use your own generated private/public key pairs when performing manual or automated testing.
  • For security purposes, we can never share the private key used by the Catalogue Events service, or even example tokens.

import { readFile } from 'node:fs/promises';
import { pathToFileURL } from 'node:url';
import { randomUUID } from 'node:crypto';
import { promisify } from 'node:util';
import { exec } from 'node:child_process';
import jwt from 'jsonwebtoken';

const signJWT = promisify(jwt.sign);
const execAsync = promisify(exec);

const myEndpoint = 'https://my-endpoint/path';

const simulateFullJWTAuthFlow = async () => {
  // Generate a test RSA key pair:
  // my-test-private.key
  // my-test-public-key.pub
  await generateRsaKeyPair();

  // Simulate how Catalogue Events generates a JWT for a given endpoint
  const { token } = await generateJwt(myEndpoint);

  // Create the request Authorization header
  const authHeader = `Bearer ${token}`;
  console.info('Authorization Header:', authHeader, '\n');

  // Simulate how a recipient would verify the JWT
  await verifyAuthHeader(authHeader, myEndpoint);
};
await simulateFullJWTAuthFlow();

// Helper functions

async function generateRsaKeyPair() {
  await execAsync('ssh-keygen -t rsa -b 2048 -m PEM -f my-test-private.key');
  console.info('Generated private key: my-test-private.key\n');

  await execAsync('openssl rsa -in my-test-private.key -pubout -outform PEM -out my-test-public-key.pub');
  console.info('Generated public key: my-test-public-key.pub\n');
}

async function generateJwt(endpoint) {
  const privateKey = await readFile(pathToFileURL('./my-test-private.key'), 'utf-8');
  const payload = {
    'iss': 'Songtradr',
    'jti': randomUUID(),
    'aud': [endpoint]
    // iat: set automatically
    // exp: set automatically
  };

  console.info('JWT Payload:', payload, '\n');

  // Sign the JWT with the private key using
  // RS256 algorithm with a 5 minute expiration
  const token = await signJWT(payload, privateKey, { algorithm: 'RS256', expiresIn: '5m' });
  console.info('Generated JWT:', token, '\n');

  return { token };
}

async function verifyAuthHeader(authHeader, endpoint) {
  try {
    // When in production, swap the public key file for the one provided here
    // https://docs.massivemusic.com/docs/technical-specification-1#public-key
    const publicKey = await readFile(pathToFileURL('./my-test-public-key.pub'), 'utf-8');

    // Get the token from the Authorization header
    const [, tokenFromHeader] = authHeader.split(' ');

    // Verify the JWT including audience claim
    const decoded = jwt.verify(tokenFromHeader, publicKey, { algorithms: ['RS256'], audience: endpoint });
    console.info('Decoded JWT:', decoded);

    // For additional validation, you could check other claims here
    // e.g. check issuer is correct, and check jti against a store of used jtis to prevent replay attacks
  } catch (error) {
    console.error('Error verifying JWT:', error);
    throw error;
  }
}

Public Key

The public key your application should use to verify requests in production:

-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuQG0FI2hqtCFRCR6vGuu
+4KR2n2/uoa3LFnierRfqF3/DyrU2984RxESqD7Rea13i19/V//R5gLnhhcj7Cbe
4PARqLd+Ei3xcTDWWPyjHy8Bv8MhWnMSu8XURugNUYfs/cJgtR3/vF5UeyAX8pjr
UygBK5QSDig7z+Vm65/m8n6QcwJVPalc6tH5dLK7MW+CXsxbBmhtvy/lVKutcW4m
HQGRxKGJr783YvsqDStDExe3flVZVcyEllO8teLexPyrvEpfjgNdH5h+08yZeajc
lvWlPQUEU4qszWIUY2UNT3adhdobACH/X59+cqFuJAbcARD2Vc4wXfp+GM7T28WF
awIDAQAB
-----END PUBLIC KEY-----

Examples

An example Authorization header:

Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJTb25ndHJhZHIiLCJqdGkiOiJlYTJiZmMwNy02ZWZkLTQxOWQtOGUyMS02MTYyY2FhMDQ4MjciLCJhdWQiOlsiaHR0cHM6Ly9teS1lbmRwb2ludC9wYXRoIl0sImlhdCI6MTc1ODg4MzM3MSwiZXhwIjoxNzU4ODgzNjcxfQ.SIGNATURE
The decoded JWT payload:

{
  "iss": "Songtradr",
  "iat": 1690804800,
  "exp": 1690805100,
  "jti": "some-uuid-v4",
  "aud": ["https://your-endpoint/path"]
}
A minimal verification example:

import fs from 'fs';
import jwt from 'jsonwebtoken';

const publicKey = fs.readFileSync('./public.pem', { encoding: 'utf-8' });

// ...

const [, token] = req.headers.authorization.split(' ');

try {
  jwt.verify(token, publicKey, { algorithms: ['RS256'], audience: 'https://your-endpoint/path' });
} catch {
  // invalid token
}

Request Metadata

Use these rules when validating headers and parsing incoming Catalogue Events requests.

  • Content-Type: application/json
  • Content-Encoding: gzip (mandatory for both Standard Metadata and Enhanced Metadata streams)
  • Body format: a JSON array containing one or more events
  • Event types: standardMetadataRelease, standardMetadataTakedown, and enhancedMetadataTrack. Store all event types indefinitely to ensure data integrity and accuracy.
  • Payload Size: requests sent gzipped are typically small (approx. < 1MiB)
    • Note: There is no defined hard limit for request payload size; your endpoint must be prepared to support larger upper limits as necessary.
  • Observability Headers: both streams use Trace IDs and User Agents to support end-to-end logging and observability.

In practice, validate the request headers first, decompress the gzip body, then parse the payload as a JSON array of events before applying your event-processing logic.

Validation & Test Data

During your onboarding process, we will send test data to your endpoint. We may also need to send the same test events after you have integrated.

Requests containing test data will comply with the schemas and should be handled like any other request. However, the data itself should be disregarded. You can identify test events by the value of the root id property, which will be a negative integer. Upon initial integration, we will send test data to validate that your endpoint can handle:

  • Maximum Release: An event that includes all fields as defined in the schema.
  • Minimum Non-Takedown Release: An event with the minimal amount of fields as defined in the schema. All nullable fields will be null, and non-required fields will be left off.
  • Explicit Takedowns: An empty release with only the fields id, version, and takedown.
  • We will also send requests with incorrect authorization headers to ensure your endpoint can handle:
  1. Missing authorization headers
  2. Empty authorization headers
  3. Invalid authorization headers
  4. Expired authorization headers
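Because test events are identified by a negative root id, they can be partitioned out of a batch before reaching your live catalogue. A minimal sketch (the helper names are illustrative):

```javascript
// Test events carry a negative integer as their root id. They comply
// with the schemas and must be acknowledged like any other event, but
// the data itself should be disregarded.
function isTestEvent(event) {
  return Number.isInteger(event.id) && event.id < 0;
}

// Split an incoming batch into live events and discardable test events.
function partitionEvents(events) {
  return {
    live: events.filter((event) => !isTestEvent(event)),
    test: events.filter(isTestEvent),
  };
}
```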

Security Validation

Secure Endpoint: We confirm that the endpoint is secure, ensuring there are no misconfigurations with the TLS certificate. If the endpoint fails this validation, you will need to diagnose and address issues such as expired certificates, domain mismatches, or missing certificates.

Load

We recommend being prepared to handle up to 400 requests per second (RPS). This rate may be reached when you are receiving backfills, or whenever we have to re-send failed events (for example, if your endpoint goes down). We typically observe catalogue updates of around 1-2 RPS. The actual rate you experience will depend on the size of the catalogue you have cleared, with the potential for higher rates during peak times (as determined by our suppliers). Please note there is currently no rate limiting in place.

Failed Messages

It is imperative that your catalogue remains up to date to ensure compliance with rightsholder requirements. For this reason, Catalogue Events requires 24/7 uptime from receiving services.

If you have experienced downtime or are scheduling downtime in advance, it is your responsibility to let us know if you believe you have failed or will fail to process any message(s).

If message(s) do fail, it is your responsibility to let us know why they have failed. Please ensure you remedy the reason for the failure(s) before requesting manual reprocessing of unsuccessful messages from your Client Success Manager. If you require further error information about specific messages, please send the Trace ID and Release ID of the message to your Client Success Manager.

Please note: Events from failed requests will be stored in the archive for up to 14 days. If a message falls out of the archive queue, a full backfill will be required to keep your catalogue up to date, adding additional load and processing on both your and our infrastructure.

Event Ingestion & Versioning

Events are triggered at the time of ingestion rather than when an action date is reached. Release dates and end/takedown dates will be published as soon as content is ingested. In many cases, this means dates will be published in advance of the action date, not on the action date. Access to stream audio will be managed in line with availability dates, i.e. you will not be able to stream content that hasn't reached the release date yet, or that has passed an end/takedown date.

Events are versioned so that consumers can ingest them in the correct order, even when they are delivered out of order or more than once. The version is an integer. If an event is received more than once (i.e. two events with an identical version), the incoming update should be reprocessed.

Each event is a complete snapshot of a release, so any existing data relating to the release, its tracks, availability, etc, should be removed. There may be instances where the first event you receive for a release is a takedown.
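The versioning and snapshot rules above can be sketched as follows; the in-memory `catalogue` map and the `applyEvent` helper are illustrative assumptions, not a prescribed implementation:

```javascript
// In-memory catalogue keyed by release id; each value is the latest
// full snapshot. A real implementation would use a database.
const catalogue = new Map();

// Apply an event only if its version is not older than the stored one.
// Equal versions are reprocessed (duplicate delivery); each event is a
// complete snapshot, so the previous data is replaced wholesale rather
// than merged.
function applyEvent(event) {
  const current = catalogue.get(event.id);
  if (current && event.version < current.version) {
    return false; // stale event delivered out of order; ignore it
  }
  catalogue.set(event.id, event); // full replacement, not a merge
  return true;
}
```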

You may experience intermittent periods where no Events are received. This will likely be due to scheduled ingestion downtime. We will queue up and gradually process any pending messages within the same day if this needs to occur. If you have any questions or concerns, please reach out to your Client Success Manager.

Handling Track and Release Download Availability

If you are a download service, please note that track-level download availability will be sent via Catalogue Events without the need for corresponding release-level download availability, for the same package and territory. The MassiveMusic APIs, however, only permit the download of tracks where both track and release download availability exist.

When using Catalogue Events data in conjunction with the Purchase or Download APIs, you must ensure you only allow purchases and downloads of tracks when the release is also available for download at that time, for the same country and package.
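As an illustrative sketch of this check (the availability field names below are assumptions for the example, not the actual schema):

```javascript
// Allow a track download only when both the track and its release are
// available for download for the same country and package at the given
// time. `downloadAvailability`, `from` and `until` are assumed field
// names for illustration only.
function canDownloadTrack(track, release, country, pkg, at = new Date()) {
  const matches = (a) =>
    a.country === country &&
    a.package === pkg &&
    new Date(a.from) <= at &&
    (a.until == null || at < new Date(a.until));
  return track.downloadAvailability.some(matches) &&
         release.downloadAvailability.some(matches);
}
```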

Trace IDs

Each request will have a Trace ID header. Trace IDs can only be linked back to requests which fail. Please send the Trace ID and the Release ID to your Client Success Manager if you require support for a specific event.

songtradr-trace-id: <some-trace-id>