DeepSeek-R1 is available on GitHub Models and in Azure AI Foundry. This sample AI application demonstrates how to use it with the OpenAI SDK and .NET, on both Azure AI Foundry and GitHub Models.
For a detailed step-by-step guide on how to use DeepSeek-R1 with GitHub Models and Microsoft Extensions for AI, check the blog post Build Intelligent Apps with .NET and DeepSeek R1 Today!. GitHub Models is easier to get started with (you just need a GitHub token; no Azure subscription is required).
In this sample, we use both Azure AI Foundry and GitHub Models with DeepSeek-R1; both platforms use the same model and infrastructure.
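As a minimal sketch of what a call to DeepSeek-R1 on GitHub Models can look like (illustrative only, not part of the sample; it assumes the `Azure.AI.Inference` package, the public GitHub Models endpoint `https://models.inference.ai.azure.com`, and a GitHub token in the `GITHUB_TOKEN` environment variable):

```csharp
using Azure;
using Azure.AI.Inference;

// Assumption: GITHUB_TOKEN holds a GitHub personal access token.
var token = Environment.GetEnvironmentVariable("GITHUB_TOKEN")
    ?? throw new InvalidOperationException("Set the GITHUB_TOKEN environment variable.");

// Assumption: this is the shared GitHub Models inference endpoint.
var client = new ChatCompletionsClient(
    endpoint: new Uri("https://models.inference.ai.azure.com"),
    credential: new AzureKeyCredential(token));

var response = client.Complete(new ChatCompletionsOptions
{
    Model = "DeepSeek-R1",
    Messages = { new ChatRequestUserMessage("Why is the sky blue?") }
});

Console.WriteLine(response.Value.Content);
```

The same client and options work against an Azure AI Foundry endpoint; only the endpoint URI and credential change.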
- Features
- Architecture diagram
- Prerequisites
- Run solution
- Guidance
- Resources
This is a sample chat demo that showcases the capabilities of DeepSeek-R1.
- There is a main Blazor web app that manages the chat interaction.
- The demo is orchestrated using .NET Aspire.
- The app also supports configuration settings to allow you to use your own DeepSeek-R1 deployment in Azure.
- The app shows the response from the reasoning model, as well as the reasoning process that led the model to that conclusion.
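DeepSeek-R1 emits its chain of thought before the final answer, wrapped in `<think>...</think>` tags. A minimal sketch of how a response could be split into reasoning and answer (a hypothetical helper for illustration, not the app's actual code):

```csharp
// Splits a DeepSeek-R1 response into the reasoning section and the final answer.
// Assumption: the model wraps its reasoning in <think>...</think> tags.
static (string Reasoning, string Answer) SplitReasoning(string content)
{
    const string open = "<think>";
    const string close = "</think>";
    int start = content.IndexOf(open, StringComparison.Ordinal);
    int end = content.IndexOf(close, StringComparison.Ordinal);
    if (start < 0 || end < 0)
        return (string.Empty, content.Trim()); // no reasoning block found

    string reasoning = content.Substring(start + open.Length, end - (start + open.Length)).Trim();
    string answer = content.Substring(end + close.Length).Trim();
    return (reasoning, answer);
}
```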
Below is a sample animation of the application running:
For more information, visit the DeepSeek-R1 documentation.
WIP - Coming soon!
Make sure the following tools are installed:
- .NET 9
- Git
- Azure Developer CLI (azd)
- VS Code or Visual Studio
- If using VS Code, install the C# Dev Kit
Run the BlazorChatDeepSeekR1.AppHost project:
```bash
cd ../BlazorChatDeepSeekR1.AppHost
dotnet run
```
The sample uses Azure AI Foundry with DeepSeek-R1, an advanced reasoning model that requires significant GPU and memory resources. Deploy the solution to Azure with the following steps:
1. Login to Azure:

   ```bash
   azd auth login
   ```

2. Provision the AIServices resource with a DeepSeek-R1 deployment:

   ```bash
   azd up
   ```

3. The deploy process should take around 10 minutes. Once the deployment is complete, you will see two URLs in the output.

4. Open the Aspire dashboard, and open the chat application.
To get the endpoint and API key for the Azure AI Foundry deployment, follow these steps:
1. Open the Azure portal and navigate to the created resource group.

2. Open the Azure AI Services model deployment; the resource name should start with `deepseekr1-...`.

3. Click **Open in Azure AI Foundry**.

4. Select **Deployments** from the tree.

5. Select the deployment for the DeepSeek-R1 model.

6. In the model details, you will find the endpoint and the API key.

7. Copy these values and apply them to the settings in the chat application.

> Note: The endpoint URL can be similar to `https://<sample-deploy-name>.services.ai.azure.com/models/chat/completions?api-version=2024-05-01-preview`. The value that must be pasted in the settings is `https://<sample-deploy-name>.services.ai.azure.com/models/`.
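The trimming of the portal URL down to the base endpoint can be sketched as follows (a hypothetical helper for illustration, not part of the sample):

```csharp
// Derives the base endpoint expected by the chat client from the full
// chat-completions URL shown in the Azure portal.
static string ToBaseEndpoint(string portalUrl)
{
    const string marker = "/models/";
    int idx = portalUrl.IndexOf(marker, StringComparison.Ordinal);
    // Keep everything up to and including "/models/"; otherwise return the URL as-is.
    return idx >= 0 ? portalUrl.Substring(0, idx + marker.Length) : portalUrl;
}
```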
The chat client is created in the `BlazorChatDeepSeekR1.Chat` project, in the `Program.cs` file.

The function `private void CreateChat()` is responsible for creating the client. It validates the `ApiKey` and `Endpoint` settings and creates a `ChatCompletionsClient`. If no `ApiKey` is provided, the client uses the default Azure credentials instead.
Here is a simplified version of the code:
```csharp
private void CreateChat()
{
    // Read the settings from the configuration
    endpoint = Configuration["endpoint"] ?? throw new ArgumentNullException("endpoint");
    apiKey = Configuration["apikey"] ?? string.Empty;
    deploymentName = Configuration["deploymentname"] ?? "DeepSeek-R1";

    // Validate whether an API key is provided and initialize the client
    if (string.IsNullOrEmpty(apiKey))
    {
        // Use DefaultAzureCredential (Managed Identity) for models in Azure AI Foundry
        var options = new DefaultAzureCredentialOptions();
        var credential = new DefaultAzureCredential(options);
        client = new ChatCompletionsClient(
            endpoint: new Uri(endpoint),
            credential: credential);
    }
    else
    {
        // An API key works for models in Azure AI Foundry and GitHub Models
        client = new ChatCompletionsClient(
            endpoint: new Uri(endpoint),
            credential: new AzureKeyCredential(apiKey));
    }
}
```
The flow is as follows:
```mermaid
graph TD
    A[Start] --> B[Read settings from configuration]
    B --> C{Is apiKey provided?}
    C -->|Yes| D[Initialize ChatCompletionsClient with ApiKey]
    C -->|No| E[Initialize DefaultAzureCredential]
    E --> F[Initialize ChatCompletionsClient with DefaultAzureCredential]
    D --> G[End]
    F --> G
    B --> H[Handle exceptions]
    H --> G
```
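Once the client is created, sending a chat request could look roughly like this (a sketch based on the `Azure.AI.Inference` API; the message content is a placeholder):

```csharp
// Send a prompt to the deployed DeepSeek-R1 model and print the reply.
var response = client.Complete(new ChatCompletionsOptions
{
    Model = deploymentName, // e.g. "DeepSeek-R1"
    Messages =
    {
        new ChatRequestSystemMessage("You are a helpful assistant."),
        new ChatRequestUserMessage("Explain .NET Aspire in one sentence.")
    }
});

Console.WriteLine(response.Value.Content);
```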
For Azure OpenAI Services, pricing varies per region and usage, so it isn't possible to predict exact costs for your usage. The majority of the Azure resources used in this infrastructure are on usage-based pricing tiers. However, Azure Container Registry has a fixed cost per registry per day.
You can try the Azure pricing calculator for the resources:
- Azure OpenAI Service: S0 tier, gpt-4o-mini and text-embedding-ada-002 models. Pricing is based on token count. Pricing
- Azure Container App: Consumption tier with 0.5 CPU, 1GiB memory/storage. Pricing is based on resource allocation, and each month allows for a certain amount of free usage. Pricing
- Azure Container Registry: Basic tier. Pricing
- Log analytics: Pay-as-you-go tier. Costs based on data ingested. Pricing
- Azure Application Insights pricing is based on a Pay-As-You-Go model. Pricing.
To clean up the provisioned Azure resources and avoid unnecessary costs, run `azd down`.
The samples in this template use Azure OpenAI Services with an API key or Managed Identity for authenticating to the Azure OpenAI service.
The Main Sample uses Managed Identity for authenticating to the Azure OpenAI service.
Additionally, we have added a GitHub Action that scans the infrastructure-as-code files and generates a report containing any detected issues. To ensure continued best practices in your own repository, we recommend that anyone creating solutions based on our templates ensure that the GitHub secret scanning setting is enabled.
You may want to consider additional security measures, such as:
- Protecting the Azure Container Apps instance with a firewall and/or Virtual Network.