Live audio analysis (coming soon)

Nabla offers two ways to analyse a stream of audio:

  • Via a WebSocket connection: you send audio packets, and Nabla sends back transcripts and medical items extracted from the audio.
  • Via an HTTP Live Streaming (HLS) endpoint: Nabla connects to your HLS stream and sends the extracted transcripts and medical facts to a webhook. This is particularly useful for connecting Nabla to any video call provider that supports HLS (such as Vonage).

WebSocket connection

Please refer to the API reference for a complete definition of the WebSocket messages and their expected format.
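As an illustration only, the exchange could be sketched in Python as below. The message types (`audio_chunk`, the event `type` field), the payload encoding, and the WebSocket URL are assumptions made for this example; the authoritative message schema is in the API reference.

```python
import asyncio
import base64
import json

# Hypothetical message shape -- the real field names and message types
# are defined in Nabla's API reference.
def build_audio_message(pcm_bytes: bytes) -> str:
    return json.dumps({
        "type": "audio_chunk",  # assumed message type
        "payload": base64.b64encode(pcm_bytes).decode("ascii"),
    })

async def stream_audio(url: str, chunks) -> None:
    # Third-party dependency: pip install websockets
    import websockets

    async with websockets.connect(url) as ws:
        for chunk in chunks:
            await ws.send(build_audio_message(chunk))
        # Nabla pushes transcripts and medical items back on the same socket.
        async for raw in ws:
            event = json.loads(raw)
            print(event.get("type"), event)
```

`asyncio.run(stream_audio(...))` would drive the coroutine; again, the URL and payload encoding here are placeholders to adapt to the reference.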

Using an HTTP Live Streaming endpoint

Setting the webhook URL to receive live analysis responses

Before calling the Nabla HLS endpoint, you need to set up a webhook URL on which you will receive the responses. An administrator can configure it in the "Webhooks" page of the Console settings.

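To sketch what the receiving side might look like, here is a minimal stdlib-only HTTP handler. The payload fields used below (`type`, `data`) are assumptions for illustration; the actual webhook payload schema is described in the API reference.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Assumed payload shape ("type" + "data"); check the webhook documentation
# for the real schema before relying on these fields.
def parse_webhook_payload(body: bytes) -> dict:
    event = json.loads(body)
    return {"kind": event.get("type", "unknown"), "data": event.get("data", {})}

class NablaWebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = parse_webhook_payload(self.rfile.read(length))
        print("received:", event["kind"])
        # Acknowledge quickly so the sender does not retry.
        self.send_response(200)
        self.end_headers()

# To run locally: HTTPServer(("", 8080), NablaWebhookHandler).serve_forever()
```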

Broadcast the HLS audio

The live audio analysis endpoint asynchronously returns:

  • a transcription of the audio stream
  • medical facts detected in the audio stream

This endpoint takes an audio stream as input. Specifically, the live audio stream must be broadcast using the HTTP Live Streaming (HLS) protocol.
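As a sketch, pointing Nabla at an HLS stream could look like the request below. The base URL, the `/listen` route, and the `hls_url` body field are hypothetical placeholders; the real endpoint and request schema are documented in the API reference.

```python
import json
from urllib import request

NABLA_API = "https://api.nabla.com/v1"  # assumed base URL

def build_hls_request(hls_url: str, token: str) -> request.Request:
    # "hls_url" is an assumed field name and "/listen" an assumed route.
    body = json.dumps({"hls_url": hls_url}).encode("utf-8")
    return request.Request(
        f"{NABLA_API}/listen",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending it: request.urlopen(build_hls_request(my_hls_url, my_token))
```

Once the request is accepted, the transcripts and medical facts arrive asynchronously on the webhook URL configured above.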

Vonage example
If you use Vonage for your video calls, you can start a live broadcast of a video call using the following REST API. This call returns the HLS URL of the video call's stream.
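For illustration, a sketch of that call's request and response handling in Python follows. The `sessionId`, `outputs`, and `broadcastUrls.hls` fields match the Vonage Video (OpenTok) broadcast API as commonly documented, but treat them as assumptions and confirm against Vonage's broadcast API reference; authentication (a project JWT) is omitted here.

```python
import json

# Assumed request body for Vonage's start-broadcast endpoint;
# verify field names against Vonage's broadcast API reference.
def build_broadcast_body(session_id: str) -> bytes:
    return json.dumps({
        "sessionId": session_id,
        "outputs": {"hls": {}},  # ask Vonage for an HLS output
    }).encode("utf-8")

# The start-broadcast response is expected to carry the stream URL
# under broadcastUrls.hls (an assumption to verify).
def extract_hls_url(response_body: bytes):
    data = json.loads(response_body)
    return data.get("broadcastUrls", {}).get("hls")
```

The HLS URL extracted here is what you would then hand to Nabla's live audio analysis endpoint.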