Weather Tracker. Real implementation details behind the Azure containerized cloud application.
This page documents the actual build path used for the Weather Tracker project: FastAPI application code, structured logging, Azure Monitor telemetry, Docker packaging, Azure Container Registry publishing, Azure Container Apps deployment, GitHub Actions CI/CD, and Azure Key Vault integration using managed identity.
GitHub push → GitHub Actions → Docker build → ACR image push → Container Apps update → FastAPI runtime → WeatherAPI external call → logs/traces → Application Insights → Key Vault secret via managed identity
Explore the real implementation
These tabs document the code and commands used in the live project. Secrets are intentionally excluded or represented by variables, but the structure, file paths, commands, and workflow match the implementation.
FastAPI application entry point
The application exposes the web UI, city search, favourites actions, and health endpoint. Azure Monitor OpenTelemetry is configured only when the Application Insights connection string exists, keeping local development clean.
import os

from fastapi import FastAPI, Request, Form
from fastapi.responses import HTMLResponse, RedirectResponse
from fastapi.staticfiles import StaticFiles
from fastapi.templating import Jinja2Templates

from azure.monitor.opentelemetry import configure_azure_monitor

from app.config import Settings
from app.db.init_db import init_db
from app.services.weather_service import WeatherService
from app.services.favourites_service import FavouritesService

connection_string = os.getenv("APPLICATIONINSIGHTS_CONNECTION_STRING")
if connection_string:
    configure_azure_monitor(connection_string=connection_string)

app = FastAPI(title="Weather Tracker Azure")
app.mount("/static", StaticFiles(directory="app/static"), name="static")
templates = Jinja2Templates(directory="app/templates")

weather_service = WeatherService()
favourites_service = FavouritesService()


@app.on_event("startup")
async def startup_event():
    Settings.validate()
    init_db()


@app.get("/", response_class=HTMLResponse)
async def home(request: Request):
    favourites = favourites_service.get_all()
    return templates.TemplateResponse(
        "index.html",
        {
            "request": request,
            "weather": None,
            "error": None,
            "searched_city": "",
            "favourites": favourites,
        },
    )


@app.post("/search", response_class=HTMLResponse)
async def search_weather(request: Request, city: str = Form(...)):
    favourites = favourites_service.get_all()
    try:
        weather = await weather_service.get_weather(city)
        return templates.TemplateResponse(
            "index.html",
            {
                "request": request,
                "weather": weather,
                "error": None,
                "searched_city": city,
                "favourites": favourites,
            },
        )
    except Exception as ex:
        return templates.TemplateResponse(
            "index.html",
            {
                "request": request,
                "weather": None,
                "error": f"Unable to retrieve weather data: {str(ex)}",
                "searched_city": city,
                "favourites": favourites,
            },
            status_code=500,
        )


@app.post("/favourites/add")
async def add_favourite(city_name: str = Form(...), country: str = Form(default="")):
    favourites_service.add(city_name=city_name, country=country)
    return RedirectResponse(url="/", status_code=303)


@app.post("/favourites/delete/{city_id}")
async def delete_favourite(city_id: int):
    favourites_service.delete(city_id)
    return RedirectResponse(url="/", status_code=303)


@app.get("/health")
async def health():
    return {"status": "ok", "environment": Settings.APP_ENV}
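Before any container build, the app can be exercised directly with Uvicorn's development server. A sketch, assuming the project's dependencies are installed in the active virtual environment and a local `.env` file supplies `WEATHER_API_KEY`:

```shell
# Local development run of the FastAPI app with auto-reload.
uvicorn app.main:app --reload --port 8000

# In a second terminal, verify the health endpoint responds.
curl http://127.0.0.1:8000/health
```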
Weather service with structured logging
The weather service calls WeatherAPI using httpx.AsyncClient. It logs request start, successful calls, latency, and HTTP failures so Azure Monitor can query and alert on application-level errors.
import time

import httpx

from app.config import Settings
from app.services.logging_service import log_error, log_info


class WeatherService:
    BASE_URL = "https://api.weatherapi.com/v1/forecast.json"

    async def get_weather(self, city: str, days: int = 3) -> dict:
        params = {
            "key": Settings.WEATHER_API_KEY,
            "q": city,
            "days": days,
            "aqi": "no",
            "alerts": "no",
        }
        start_time = time.perf_counter()
        log_info("Weather request started", city=city, days=days)
        try:
            async with httpx.AsyncClient(timeout=15.0) as client:
                response = await client.get(self.BASE_URL, params=params)
                response.raise_for_status()
            latency = round(time.perf_counter() - start_time, 2)
            log_info(
                "Weather request successful",
                city=city,
                days=days,
                status_code=response.status_code,
                latency_seconds=latency,
            )
            return response.json()
        except httpx.HTTPStatusError as ex:
            latency = round(time.perf_counter() - start_time, 2)
            log_error(
                "Weather API HTTP error",
                city=city,
                days=days,
                status_code=ex.response.status_code,
                latency_seconds=latency,
                error=str(ex),
            )
            raise
        except Exception as ex:
            latency = round(time.perf_counter() - start_time, 2)
            log_error(
                "Weather API unexpected error",
                city=city,
                days=days,
                latency_seconds=latency,
                error=str(ex),
            )
            raise
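The latency pattern in `get_weather` (a `perf_counter` reading before the call, a rounded delta afterwards, logged on success and on failure) can be isolated into a small helper. This is an illustrative sketch, not code from the project:

```python
import time
from typing import Callable, Tuple, TypeVar

T = TypeVar("T")


def timed_call(fn: Callable[[], T]) -> Tuple[T, float]:
    """Run fn and return (result, latency_seconds), rounded to two
    decimal places like the measurement in WeatherService.get_weather."""
    start = time.perf_counter()
    result = fn()
    return result, round(time.perf_counter() - start, 2)


# Example: time a cheap computation.
result, latency = timed_call(lambda: sum(range(1000)))
print(result, latency)
```

Logging the latency on the failure paths as well as the success path is what makes the `latency_seconds` field usable in KQL for spotting slow upstream calls that eventually error out.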
Runtime configuration
The app reads its runtime configuration from environment variables, which lets the same codebase run locally, in App Service, inside Docker, in ACI, and in Container Apps without code changes.
import os

from dotenv import load_dotenv

load_dotenv()


class Settings:
    WEATHER_API_KEY = os.getenv("WEATHER_API_KEY", "")
    APP_ENV = os.getenv("APP_ENV", "local")
    DB_PATH = os.getenv("DB_PATH", "weather.db")

    @classmethod
    def validate(cls):
        if not cls.WEATHER_API_KEY:
            raise ValueError("WEATHER_API_KEY environment variable is required")
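Calling `Settings.validate()` at startup means a missing key fails the deployment immediately with a clear message, rather than surfacing later as a failed API call. The fail-fast behaviour can be demonstrated in isolation with a simplified sketch (unlike the project's `Settings`, this mirror reads the variable at call time rather than import time):

```python
import os


class DemoSettings:
    """Standalone mirror of the project's Settings pattern (illustrative only)."""

    @classmethod
    def validate(cls) -> str:
        key = os.getenv("WEATHER_API_KEY", "")
        if not key:
            raise ValueError("WEATHER_API_KEY environment variable is required")
        return key


# Missing key: startup fails immediately with a clear message.
os.environ.pop("WEATHER_API_KEY", None)
try:
    DemoSettings.validate()
except ValueError as ex:
    print(ex)  # WEATHER_API_KEY environment variable is required

# Key present: validation passes.
os.environ["WEATHER_API_KEY"] = "demo-key"
print(DemoSettings.validate())  # demo-key
```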
Structured logging helper
This logging helper produces consistent log messages with JSON context. Application Insights receives these logs as traces, making them searchable through KQL.
import json
import logging
from typing import Any

logger = logging.getLogger("weather-tracker")
logger.setLevel(logging.INFO)

handler = logging.StreamHandler()
formatter = logging.Formatter(
    "%(asctime)s | %(levelname)s | %(name)s | %(message)s"
)
handler.setFormatter(formatter)

if not logger.handlers:
    logger.addHandler(handler)


def _format_message(message: str, **kwargs: Any) -> str:
    if not kwargs:
        return message
    return f"{message} | {json.dumps(kwargs, default=str)}"


def log_info(message: str, **kwargs: Any):
    logger.info(_format_message(message, **kwargs))


def log_error(message: str, **kwargs: Any):
    logger.error(_format_message(message, **kwargs))
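The helper emits a human-readable prefix followed by a JSON payload, so KQL can both full-text search the message and parse the context fields. A standalone sketch of the same formatting, mirroring `_format_message` above:

```python
import json
from typing import Any


def format_message(message: str, **kwargs: Any) -> str:
    # Same shape as the project's _format_message helper:
    # the plain message, or "message | {json context}".
    if not kwargs:
        return message
    return f"{message} | {json.dumps(kwargs, default=str)}"


line = format_message("Weather request started", city="London", days=3)
print(line)  # Weather request started | {"city": "London", "days": 3}
```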
KQL queries used in Application Insights
These queries were used to verify request telemetry, inspect structured logs, and confirm error traces before creating an alert rule.
requests
| order by timestamp desc
| take 20

traces
| where message contains "Weather request"
| order by timestamp desc
| take 20

traces
| where severityLevel >= 3
| where message contains "Weather API HTTP error"
| order by timestamp desc
| take 10
Azure Monitor scheduled query alert
The alert fires when Application Insights receives a weather API HTTP error trace. This simulates a production monitoring workflow for external dependency failures.
AI_ID=$(az monitor app-insights component show \
  --app weather-tracker-ai \
  --resource-group $RG \
  --query id \
  --output tsv)

az monitor scheduled-query create \
  --name "alert-weather-api-errors" \
  --resource-group $RG \
  --scopes $AI_ID \
  --description "Alert when Weather Tracker logs Weather API HTTP errors" \
  --condition "count 'WeatherApiErrors' > 0" \
  --condition-query WeatherApiErrors="traces | where severityLevel >= 3 | where message contains 'Weather API HTTP error'" \
  --evaluation-frequency 5m \
  --window-size 5m \
  --severity 2

AG_ID=$(az monitor action-group show \
  --name ag-weather-alerts \
  --resource-group $RG \
  --query id \
  --output tsv)

az monitor scheduled-query update \
  --name alert-weather-api-errors \
  --resource-group $RG \
  --action-groups $AG_ID
Dockerfile
The Dockerfile packages the FastAPI application on a slim Python 3.12 base image. Gunicorn runs the application with Uvicorn workers, which is more production-appropriate than the local development server.
FROM python:3.12-slim
WORKDIR /app
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ENV APP_ENV=container
ENV APP_PORT=8000
ENV DB_PATH=/app/weather.db
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app ./app
EXPOSE 8000
CMD ["gunicorn", "-w", "2", "-k", "uvicorn.workers.UvicornWorker", "-b", "0.0.0.0:8000", "app.main:app"]
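The image installs dependencies from requirements.txt, which is not reproduced on this page. A plausible minimal set, inferred from the imports in the code above (names illustrative, versions omitted; `python-multipart` is what FastAPI needs for `Form` handling):

```
fastapi
uvicorn
gunicorn
jinja2
python-multipart
httpx
python-dotenv
azure-monitor-opentelemetry
```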
.dockerignore
The Docker context excludes local-only files, virtual environments, local databases, Git metadata, and deployment zip files. This keeps the image smaller and avoids copying secrets or unwanted state.
.venv
.git
.env
.env.save
*.db
*.sqlite3
app.zip
__pycache__
*.pyc
.pytest_cache
.vscode
Local container validation
The app was tested locally in Docker before being pushed to Azure. The 8080:8000 port mapping was chosen during troubleshooting to avoid local port conflicts between WSL, Docker Desktop, and browser localhost routing.
docker build -t weather-tracker:local .

docker run --rm -p 8080:8000 \
  --env-file .env \
  -e APP_ENV=container \
  weather-tracker:local

curl -i http://127.0.0.1:8080/health
Azure Container Registry
Azure Container Registry stores the Docker image used by Container Apps. The image was first manually pushed, then later automated through GitHub Actions.
ACR_NAME=acrweather17789

az acr create \
  --resource-group $RG \
  --name $ACR_NAME \
  --sku Basic \
  --location ukwest

ACR_LOGIN=$(az acr show \
  --name $ACR_NAME \
  --resource-group $RG \
  --query loginServer \
  --output tsv)

az acr login --name $ACR_NAME

docker tag weather-tracker:local \
  $ACR_LOGIN/weather-tracker:v1

docker push $ACR_LOGIN/weather-tracker:v1

az acr repository list \
  --name $ACR_NAME \
  --output table

az acr repository show-tags \
  --name $ACR_NAME \
  --repository weather-tracker \
  --output table
Azure Container Instances validation
ACI was used as a short-lived test runtime to prove the ACR image could run in Azure before moving to Azure Container Apps. It was deleted after validation to control cost.
az container create \
  --resource-group $RG \
  --name weather-tracker-aci \
  --image $ACR_LOGIN/weather-tracker:v1 \
  --os-type Linux \
  --cpu 1 \
  --memory 1 \
  --registry-login-server $ACR_LOGIN \
  --registry-username $(az acr credential show --name $ACR_NAME --query username -o tsv) \
  --registry-password $(az acr credential show --name $ACR_NAME --query passwords[0].value -o tsv) \
  --dns-name-label weathertracker$RANDOM \
  --ports 8000 \
  --environment-variables \
    WEATHER_API_KEY=$WEATHER_API_KEY \
    APP_ENV=azure-container \
    DB_PATH=/app/weather.db

az container show \
  --resource-group $RG \
  --name weather-tracker-aci \
  --query "{state:instanceView.state,restartCount:containers[0].instanceView.restartCount,currentState:containers[0].instanceView.currentState.state,fqdn:ipAddress.fqdn}" \
  --output table

curl http://weathertracker11939.ukwest.azurecontainer.io:8000/health

az container delete \
  --resource-group $RG \
  --name weather-tracker-aci \
  --yes
Azure Container Apps deployment
Container Apps is the final runtime for the project. It provides public HTTPS ingress, a consumption workload profile, and scale-to-zero behavior.
WORKSPACE_NAME=log-weather$RANDOM

az monitor log-analytics workspace create \
  --resource-group $RG \
  --workspace-name $WORKSPACE_NAME \
  --location ukwest

LOG_ANALYTICS_ID=$(az monitor log-analytics workspace show \
  --resource-group $RG \
  --workspace-name $WORKSPACE_NAME \
  --query customerId \
  --output tsv)

LOG_ANALYTICS_KEY=$(az monitor log-analytics workspace get-shared-keys \
  --resource-group $RG \
  --workspace-name $WORKSPACE_NAME \
  --query primarySharedKey \
  --output tsv)

ENV_NAME=env-weather$RANDOM

az containerapp env create \
  --name $ENV_NAME \
  --resource-group $RG \
  --location ukwest \
  --logs-workspace-id $LOG_ANALYTICS_ID \
  --logs-workspace-key $LOG_ANALYTICS_KEY

az containerapp create \
  --name weather-tracker-ca \
  --resource-group $RG \
  --environment $ENV_NAME \
  --image $ACR_LOGIN/weather-tracker:v1 \
  --registry-server $ACR_LOGIN \
  --registry-username $(az acr credential show --name $ACR_NAME --query username -o tsv) \
  --registry-password $(az acr credential show --name $ACR_NAME --query passwords[0].value -o tsv) \
  --target-port 8000 \
  --ingress external \
  --cpu 0.5 \
  --memory 1.0Gi \
  --min-replicas 0 \
  --max-replicas 2 \
  --env-vars \
    WEATHER_API_KEY=$WEATHER_API_KEY \
    APP_ENV=azure-container-apps \
    DB_PATH=/app/weather.db

az containerapp show \
  --name weather-tracker-ca \
  --resource-group $RG \
  --query properties.configuration.ingress.fqdn \
  --output tsv
curl https://weather-tracker-ca.purpleglacier-4ce16430.ukwest.azurecontainerapps.io/health
GitHub Actions CI/CD pipeline
The CI/CD workflow logs into Azure, logs into ACR, builds the Docker image, pushes it to ACR, and updates Azure Container Apps automatically whenever code is pushed to the main branch.
name: Build and Deploy to Azure Container Apps

on:
  push:
    branches:
      - main

env:
  IMAGE_NAME: weather-tracker

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Login to Azure
        uses: azure/login@v2
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      - name: Login to Azure Container Registry
        run: |
          docker login ${{ secrets.ACR_LOGIN_SERVER }} \
            -u ${{ secrets.ACR_USERNAME }} \
            -p ${{ secrets.ACR_PASSWORD }}

      - name: Build Docker image
        run: |
          docker build \
            -t ${{ secrets.ACR_LOGIN_SERVER }}/weather-tracker:latest .

      - name: Push Docker image
        run: |
          docker push \
            ${{ secrets.ACR_LOGIN_SERVER }}/weather-tracker:latest

      - name: Update Container App
        run: |
          az containerapp update \
            --name weather-tracker-ca \
            --resource-group rg-weather-tracker-dev-ukwest \
            --image ${{ secrets.ACR_LOGIN_SERVER }}/weather-tracker:latest
GitHub repository secrets
These repository secrets were configured so GitHub Actions could authenticate securely without hardcoding credentials in the workflow file.
AZURE_CREDENTIALS = Full Azure service principal JSON
ACR_LOGIN_SERVER = acrweather17789.azurecr.io
ACR_USERNAME = acrweather17789
ACR_PASSWORD = First ACR admin password value
The values are not committed to source control. GitHub masks these values during pipeline execution.
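If the GitHub CLI is available, the same secrets can be created from a terminal instead of the repository settings page. A sketch; the credentials file name is illustrative:

```shell
# Create repository secrets with the GitHub CLI (values are never echoed).
gh secret set AZURE_CREDENTIALS < azure-credentials.json
gh secret set ACR_LOGIN_SERVER --body "acrweather17789.azurecr.io"
gh secret set ACR_USERNAME --body "acrweather17789"
gh secret set ACR_PASSWORD --body "$ACR_PASSWORD"
```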
Azure Key Vault creation and secret storage
Key Vault was added at the end of the project to remove plaintext secret handling from the Container App runtime configuration. The vault uses RBAC authorization.
KV_NAME=kv-weather-2969

az keyvault create \
  --name $KV_NAME \
  --resource-group $RG \
  --location ukwest \
  --enable-rbac-authorization true

USER_OBJECT_ID=$(az ad signed-in-user show --query id -o tsv)

KV_ID=$(az keyvault show \
  --name $KV_NAME \
  --resource-group $RG \
  --query id \
  -o tsv)

az role assignment create \
  --assignee $USER_OBJECT_ID \
  --role "Key Vault Secrets Officer" \
  --scope $KV_ID

az keyvault secret set \
  --vault-name $KV_NAME \
  --name WEATHER-API-KEY \
  --value "$WEATHER_API_KEY"
Managed identity and Key Vault reference
The Container App receives a system-assigned managed identity, gets read-only access to Key Vault secrets, and then references the secret from Container Apps configuration.
az containerapp identity assign \
  --name weather-tracker-ca \
  --resource-group $RG \
  --system-assigned

CA_PRINCIPAL_ID=$(az containerapp show \
  --name weather-tracker-ca \
  --resource-group $RG \
  --query identity.principalId \
  --output tsv)

az role assignment create \
  --assignee $CA_PRINCIPAL_ID \
  --role "Key Vault Secrets User" \
  --scope $KV_ID

WEATHER_SECRET_URI=$(az keyvault secret show \
  --vault-name $KV_NAME \
  --name WEATHER-API-KEY \
  --query id \
  --output tsv)

az containerapp secret set \
  --name weather-tracker-ca \
  --resource-group $RG \
  --secrets weather-api-key=keyvaultref:$WEATHER_SECRET_URI,identityref:system

az containerapp update \
  --name weather-tracker-ca \
  --resource-group $RG \
  --set-env-vars \
    WEATHER_API_KEY=secretref:weather-api-key \
    APP_ENV=azure-container-apps \
    DB_PATH=/app/weather.db
curl https://weather-tracker-ca.purpleglacier-4ce16430.ukwest.azurecontainerapps.io/health
# Expected:
# {"status":"ok","environment":"azure-container-apps"}