The SDK ensures compatibility with a range of frameworks, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above. It keeps dependencies to a minimum to avoid version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the core capabilities of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file from a URL:

```csharp
using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();

Console.WriteLine(transcript.Text);
```

For local files, similar code can be used to obtain a transcription:

```csharp
await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();

Console.WriteLine(transcript.Text);
```

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is particularly useful for applications that require immediate processing of audio data.

```csharp
using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}")
);

transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}")
);

await transcriber.ConnectAsync();

// Pseudocode for getting audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();
```

Using LeMUR for LLM Apps

The SDK integrates with LeMUR, allowing developers to build large language model (LLM) applications on voice data. Here is an example:

```csharp
var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a short summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);

Console.WriteLine(response.Response);
```

Audio Intelligence Models

In addition, the SDK has built-in support for audio intelligence models, enabling sentiment analysis and other advanced features:

```csharp
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}
```

To learn more, visit the official AssemblyAI blog.

Image source: Shutterstock.
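As a rough sketch of what the compatibility story looks like in practice, a project consuming the SDK might multi-target the supported frameworks in its project file. The package name and wildcard version below are assumptions; check NuGet for the exact package and current version:

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <!-- Multi-target the frameworks the SDK supports -->
    <TargetFrameworks>net6.0;net462;netstandard2.0</TargetFrameworks>
    <LangVersion>latest</LangVersion>
  </PropertyGroup>

  <ItemGroup>
    <!-- Package name assumed; pin a real version in production -->
    <PackageReference Include="AssemblyAI" Version="*" />
  </ItemGroup>

</Project>
```

Because the SDK minimizes its dependency graph, multi-targeting like this should not require binding redirects on .NET Framework.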