SecureShare Hub — Engineering Deep Dive
This page walks through the full implementation behind SecureShare Hub—from Python Functions and Terraform modules to GitHub Actions pipelines, KQL queries, and the complete malware-scanning workflow that brings the architecture to life as a secure Azure workload.
Explore the implementation.
Switch between Python Functions, Terraform IaC, CI/CD pipelines, observability queries, and the malware scanning lifecycle to see how each part of SecureShare Hub is built.
Python – SAS issuance Function
The download Function is the final gate before users receive a file. It validates identity from Easy Auth, checks that the target blob has been scanned and marked clean, and then issues a short-lived user delegation SAS URL for direct Blob Storage download.
import datetime

import azure.functions as func
from azure.identity import DefaultAzureCredential
from azure.storage.blob import (
    BlobSasPermissions,
    BlobServiceClient,
    generate_blob_sas,
)


def get_user_from_principal(req: func.HttpRequest) -> str:
    # Parsed from the X-MS-CLIENT-PRINCIPAL header (Easy Auth)
    principal_header = req.headers.get("X-MS-CLIENT-PRINCIPAL")
    # In the real implementation this is decoded + validated; simplified here.
    return principal_header or "unknown"


def is_clean_file(container: str, blob_name: str) -> bool:
    # In the real project this checks blob metadata (scanStatus=clean)
    # via BlobServiceClient. Here it's represented as a stub.
    return True


def main(req: func.HttpRequest) -> func.HttpResponse:
    user = get_user_from_principal(req)
    container = "safe-files"
    blob_name = req.params.get("fileName")

    if not blob_name:
        return func.HttpResponse("fileName is required.", status_code=400)
    if not is_clean_file(container, blob_name):
        return func.HttpResponse("File is not in a clean state.", status_code=403)

    credential = DefaultAzureCredential()
    account_url = "https://<storage-account-name>.blob.core.windows.net"
    blob_service = BlobServiceClient(account_url, credential=credential)

    # Request a user delegation key valid for ~30 minutes
    delegation_key = blob_service.get_user_delegation_key(
        datetime.datetime.utcnow(),
        datetime.datetime.utcnow() + datetime.timedelta(minutes=30),
    )

    # Generate a short-lived SAS (e.g. 15 minutes)
    sas_token = generate_blob_sas(
        account_name="<storage-account-name>",
        container_name=container,
        blob_name=blob_name,
        user_delegation_key=delegation_key,
        permission=BlobSasPermissions(read=True),
        expiry=datetime.datetime.utcnow() + datetime.timedelta(minutes=15),
    )

    sas_url = f"{account_url}/{container}/{blob_name}?{sas_token}"
    # In the real project this event is also logged for audit.
    return func.HttpResponse(sas_url)
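The principal parsing stubbed out above can be sketched as follows. Easy Auth delivers `X-MS-CLIENT-PRINCIPAL` as Base64-encoded JSON containing a `claims` array; which claim type carries the username depends on the identity provider, so the fallbacks here are illustrative:

```python
import base64
import json


def decode_client_principal(header_value: str) -> str:
    """Decode the Base64 JSON that Easy Auth puts in X-MS-CLIENT-PRINCIPAL
    and pull a human-readable identity out of its claims list."""
    payload = json.loads(base64.b64decode(header_value))
    claims = {c["typ"]: c["val"] for c in payload.get("claims", [])}
    # Which claim holds the username varies by provider; try common ones.
    return claims.get("preferred_username") or claims.get("name") or "unknown"
```

A production version would additionally validate that the header was injected by Easy Auth and not supplied by the caller directly.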
Terraform – storage, containers & Event Grid
Terraform builds the storage account, private containers, an Event Grid system topic, and a subscription that wires BlobCreated events to the Scan Function. The goal is a repeatable, auditable baseline.
resource "azurerm_storage_account" "secureshare" {
  name                     = "secureshare<random>"
  resource_group_name      = azurerm_resource_group.main.name
  location                 = azurerm_resource_group.main.location
  account_tier             = "Standard"
  account_replication_type = "GZRS"

  # Replaces allow_blob_public_access from azurerm v2.
  allow_nested_items_to_be_public = false
}

resource "azurerm_storage_container" "incoming_raw" {
  name                  = "incoming-raw"
  storage_account_name  = azurerm_storage_account.secureshare.name
  container_access_type = "private"
}

resource "azurerm_storage_container" "safe_files" {
  name                  = "safe-files"
  storage_account_name  = azurerm_storage_account.secureshare.name
  container_access_type = "private"
}

resource "azurerm_storage_container" "quarantine" {
  name                  = "quarantine"
  storage_account_name  = azurerm_storage_account.secureshare.name
  container_access_type = "private"
}

resource "azurerm_eventgrid_system_topic" "blob_events" {
  name                   = "secureshare-blob-events"
  location               = azurerm_resource_group.main.location
  resource_group_name    = azurerm_resource_group.main.name
  source_arm_resource_id = azurerm_storage_account.secureshare.id
  topic_type             = "Microsoft.Storage.StorageAccounts"
}

resource "azurerm_eventgrid_system_topic_event_subscription" "scan_function" {
  name                  = "scan-on-upload"
  system_topic          = azurerm_eventgrid_system_topic.blob_events.name
  resource_group_name   = azurerm_resource_group.main.name
  event_delivery_schema = "EventGridSchema"
  included_event_types  = ["Microsoft.Storage.BlobCreated"]

  azure_function_endpoint {
    # The endpoint targets the function inside the app, not the app itself.
    function_id = "${azurerm_function_app.scan.id}/functions/<scan-function-name>"
  }
}
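The subscription above delivers `Microsoft.Storage.BlobCreated` events whose payload carries the blob URL in a `url` field. A minimal sketch of how the Scan Function can split that URL into container and blob name (the helper name is illustrative):

```python
from urllib.parse import urlparse


def parse_blob_created(event_data: dict) -> tuple[str, str]:
    """Split the blob URL of a BlobCreated event into (container, blob name)."""
    path = urlparse(event_data["url"]).path.lstrip("/")
    container, _, blob_name = path.partition("/")
    return container, blob_name
```

Using `partition` rather than `split` keeps "virtual directory" prefixes (e.g. `2024/invoice-Q1.zip`) as part of the blob name.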
CI/CD – GitHub Actions with OIDC
The pipeline uses OpenID Connect to log in to Azure – no client secrets are stored in GitHub; the client, tenant, and subscription IDs passed to azure/login are identifiers, not credentials. One job applies Terraform, another publishes the Function App. This mirrors a production-ready, passwordless setup.
name: deploy-secureshare

on:
  push:
    branches: [ "main" ]

permissions:
  id-token: write # required for OIDC
  contents: read

jobs:
  infra:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Azure login (OIDC)
        uses: azure/login@v2
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

      - name: Terraform apply
        run: |
          terraform init
          terraform apply -auto-approve

  app:
    runs-on: ubuntu-latest
    needs: infra
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Azure login (OIDC)
        uses: azure/login@v2
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

      - name: Deploy Function App
        run: |
          npm install -g azure-functions-core-tools@4 --unsafe-perm true
          func azure functionapp publish $FUNCTION_APP_NAME
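The OIDC login works because the Entra application carries a federated identity credential that trusts tokens GitHub issues for this repository and branch. A sketch of such a credential's configuration (the credential name and the org/repo placeholders are illustrative):

```json
{
  "name": "github-main-deploy",
  "issuer": "https://token.actions.githubusercontent.com",
  "subject": "repo:<org>/<repo>:ref:refs/heads/main",
  "audiences": ["api://AzureADTokenExchange"]
}
```

The `subject` pins the trust to pushes on `main`; a token minted for any other repo, branch, or event fails the exchange.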
KQL – auditing SAS issuance & downloads
Log Analytics and Application Insights capture both SAS generation events and download requests. These sample queries show how to answer “who downloaded what, when, and from where?” in seconds.
// SAS issuance by user
// Workspace-based App Insights tables expose custom dimensions as "Properties"
AppTraces
| where Message has "Issued SAS for file"
| project TimeGenerated,
          UserId = tostring(Properties.userId),
          FileName = tostring(Properties.fileName),
          CorrelationId = tostring(Properties.correlationId)
| order by TimeGenerated desc

// Download traffic by user and client IP
AppRequests
| where Name == "GET /download"
| summarize Downloads = count() by
            UserId = tostring(Properties.userId),
            ClientIP,
            bin(TimeGenerated, 1h)
| order by TimeGenerated desc
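The per-event dimensions those queries read have to be emitted by the Function. A sketch using the `custom_dimensions` convention understood by OpenCensus-style Application Insights log exporters (the logger name and helper are illustrative):

```python
import logging

audit_logger = logging.getLogger("secureshare.audit")


def log_sas_issued(user_id: str, file_name: str, correlation_id: str) -> None:
    # With an Application Insights log exporter attached (e.g. the OpenCensus
    # AzureLogHandler), the "custom_dimensions" dict surfaces as the per-event
    # dimensions that the queries above project out.
    audit_logger.info(
        "Issued SAS for file",
        extra={
            "custom_dimensions": {
                "userId": user_id,
                "fileName": file_name,
                "correlationId": correlation_id,
            }
        },
    )
```

Keeping the message text stable ("Issued SAS for file") is what makes the `has` filter in the first query reliable.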
Malware pipeline – before & after states
Files are never served directly from uploads. They move through an event-driven pipeline that scans
content, promotes clean files into safe-files, and isolates suspicious content in
quarantine. Metadata such as scanStatus tells the SAS Function what is safe
to serve.
// Before scanning
Container      BlobName         scanStatus
------------   --------------   ----------
incoming-raw   invoice-Q1.zip   pending

// After a clean scan
Container      BlobName         scanStatus
------------   --------------   ----------
safe-files     invoice-Q1.zip   clean

// After a failed scan
Container      BlobName         scanStatus
------------   --------------   ----------
quarantine     invoice-Q1.zip   infected
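The state transitions above reduce to a small routing decision. A sketch of that logic as a pure function (the helper name is illustrative; the real pipeline performs the move with `BlobClient.start_copy_from_url`/`delete_blob` and stamps metadata via `set_blob_metadata`):

```python
def promotion_plan(blob_name: str, scan_status: str) -> dict:
    """Decide where a scanned blob moves and which metadata it gets.

    Clean files are promoted to safe-files; anything else is quarantined,
    so the SAS Function only ever serves blobs marked scanStatus=clean.
    """
    destination = "safe-files" if scan_status == "clean" else "quarantine"
    return {
        "source": f"incoming-raw/{blob_name}",
        "destination": f"{destination}/{blob_name}",
        "metadata": {"scanStatus": scan_status},
    }
```

Treating any non-clean verdict as quarantine-worthy keeps the default fail-closed: an unknown scanner status can never land a file in safe-files.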