I had just finished building the REST API that lets Claude upload blog posts programmatically. It worked on dev. Then I tried production — and hit a wall.
The bearer token was accepted, but the server-side Cognito authentication failed. After debugging, the root cause was embarrassingly simple: I had changed the service account password in the Amplify console, but the running server was still using the old value. Amplify bakes environment variables into the build artifact. To pick up the new password, I would need to redeploy the entire site.
That was the moment I decided to move credentials out of environment variables entirely.
The Blog API uses a two-layer authentication model. The caller sends a bearer token. The server validates it, then uses a Cognito service account (username + password) to get a JWT for AppSync writes. Three secrets in total:
API_BEARER_TOKEN — the shared secret between caller and server
CLAUDE_SERVICE_USERNAME — the Cognito service account
CLAUDE_SERVICE_PASSWORD — its password
Storing these as Amplify environment variables had two problems:
Stale values. Amplify injects env vars at build time. Change a password in the console, and the running server still uses the old one until you redeploy. This is confusing and error-prone — especially when each branch has its own Cognito User Pool with its own service account.
No portable access. Other repositories that want to use the Blog API (like my AI streaming platform) need the bearer token. With env vars, the only way to share it was to copy-paste it into conversation. That defeats the purpose of having a programmable API.
Secrets Manager solves both problems. The server fetches credentials at runtime — not at build time — so changes take effect immediately. And any agent with AWS CLI access can retrieve the token from the same source.
One secret per Amplify branch, following a consistent naming convention:
personalsite/dev/blog-api
personalsite/main/blog-api
Each secret contains the same three fields:
{
  "CLAUDE_SERVICE_USERNAME": "admin@myai4.co.uk",
  "CLAUDE_SERVICE_PASSWORD": "...",
  "API_BEARER_TOKEN": "..."
}
The service-auth.ts utility was updated to fetch credentials from Secrets Manager instead of process.env. The key design decisions:
Automatic branch detection. Amplify sets AWS_BRANCH automatically on every deployment. The server derives the secret name from it — branch dev reads personalsite/dev/blog-api, branch main reads personalsite/main/blog-api. No per-branch configuration needed.
Caching with TTL. Fetching from Secrets Manager on every API call would add latency and cost. The credentials are cached in memory for 5 minutes. This means a credential change takes at most 5 minutes to propagate — without any deployment.
Graceful fallback. When AWS_BRANCH is not set (local development), the server falls back to process.env. This means .env.local still works for local dev, and the Secrets Manager integration is completely transparent.
async function getSecrets(): Promise<BlogApiSecrets> {
  // Serve from the in-memory cache while the TTL is still valid
  if (cachedSecrets && Date.now() < cachedSecrets.expiresAt) {
    return cachedSecrets.secrets;
  }

  // AWS_BRANCH is set automatically by Amplify on every deployment
  const branch = process.env.AWS_BRANCH;
  const secretName = branch ? `personalsite/${branch}/blog-api` : null;

  if (secretName) {
    const result = await secretsClient.send(
      new GetSecretValueCommand({ SecretId: secretName })
    );
    const secrets = JSON.parse(result.SecretString!);
    // Cache for 5 minutes
    cachedSecrets = { secrets, expiresAt: Date.now() + 5 * 60 * 1000 };
    return secrets;
  }

  // Fallback to process.env for local dev
  return {
    CLAUDE_SERVICE_USERNAME: process.env.CLAUDE_SERVICE_USERNAME || '',
    CLAUDE_SERVICE_PASSWORD: process.env.CLAUDE_SERVICE_PASSWORD || '',
    API_BEARER_TOKEN: process.env.API_BEARER_TOKEN || '',
  };
}
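The knock-on effect is that token validation can no longer be synchronous, since the expected token may need to come from Secrets Manager. Here is a minimal sketch of what an async validateBearerToken might look like — the stubbed getSecrets and the timing-safe comparison are my additions for illustration, not the actual implementation:

```typescript
import { timingSafeEqual } from "node:crypto";

type BlogApiSecrets = { API_BEARER_TOKEN: string };

// Stub standing in for the Secrets Manager-backed getSecrets() above.
async function getSecrets(): Promise<BlogApiSecrets> {
  return { API_BEARER_TOKEN: "example-token" };
}

async function validateBearerToken(header: string | null): Promise<boolean> {
  if (!header?.startsWith("Bearer ")) return false;
  const provided = Buffer.from(header.slice("Bearer ".length));
  const expected = Buffer.from((await getSecrets()).API_BEARER_TOKEN);
  // Constant-time comparison avoids leaking token contents via timing;
  // timingSafeEqual requires equal lengths, hence the length check first.
  return provided.length === expected.length && timingSafeEqual(provided, expected);
}
```

Every call site that previously checked the token inline now has to await the result, which is the change reflected in the table below.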
The Amplify SSR Lambda needs permission to call secretsmanager:GetSecretValue. I added this to amplify/backend.ts by walking the CDK construct tree to find the SSR function — the same pattern already used for Bedrock permissions:
for (const node of allNodes) {
  if (node.node.id.includes('ssrFunction') && 'addToRolePolicy' in node) {
    (node as any).addToRolePolicy(
      new PolicyStatement({
        effect: Effect.ALLOW,
        actions: ['secretsmanager:GetSecretValue'],
        resources: [
          `arn:aws:secretsmanager:eu-west-2:${account}:secret:personalsite/*/blog-api-*`,
        ],
      })
    );
    break;
  }
}
The resource ARN uses a wildcard for the branch name (personalsite/*/blog-api-*), so the same policy works for any branch without modification. The trailing -* is not optional either: Secrets Manager appends a random six-character suffix to every secret's ARN, so a policy scoped to the exact name would never match.
With credentials in Secrets Manager, I built a portable upload script that any agent can use from any repository. It fetches the bearer token automatically using the caller's AWS CLI session:
# Upload a single post to dev
./scripts/upload-post.sh --env dev blog_posts/01-building-the-foundation.md
# Upload all posts to production
./scripts/upload-post.sh --env main --all blog_posts/
The script reads --env to determine which secret to fetch, parses the markdown file's YAML frontmatter, constructs the JSON payload, and POSTs to the API. No tokens in shell history, no credentials in conversation, no manual curl.
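The token-fetching step is the heart of the script. A sketch of how it might work — function names, the endpoint URL, and the payload handling here are illustrative placeholders, not the script's actual contents; it assumes jq is installed:

```shell
# fetch_token ENV: retrieve the bearer token for a branch's secret
# using the caller's existing AWS CLI session.
fetch_token() {
  aws secretsmanager get-secret-value \
    --secret-id "personalsite/$1/blog-api" \
    --query SecretString --output text \
    | jq -r '.API_BEARER_TOKEN'
}

# post_payload ENV FILE: POST a prepared JSON payload to the API.
# (The URL is a placeholder, not the real endpoint.)
post_payload() {
  curl -sf -X POST "https://example.com/api/blog-posts" \
    -H "Authorization: Bearer $(fetch_token "$1")" \
    -H "Content-Type: application/json" \
    -d @"$2"
}
```

Because the token is resolved inside a command substitution, it never appears in shell history or on screen.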
| Before | After |
|---|---|
| Credentials in Amplify env vars | Credentials in Secrets Manager |
| Stale after console changes until redeployed | Live within 5 minutes, no deployment |
| Token shared via copy-paste | Token fetched via AWS CLI |
| Manual curl with bearer token | upload-post.sh handles everything |
| validateBearerToken was synchronous | Now async (reads from cache/Secrets Manager) |
Amplify env vars are build-time, not runtime. This is not obvious from the documentation. If you need dynamic configuration, use Secrets Manager or Parameter Store.
AWS_BRANCH is your friend. Amplify sets this automatically on every build. Deriving resource names from it eliminates per-branch configuration entirely.
Cache aggressively, but with TTL. A 5-minute cache means near-zero latency overhead for Secrets Manager lookups, while still allowing credential rotation without deployment. The trade-off — up to 5 minutes of stale credentials after a change — is acceptable for a blog API.
This blog post was uploaded using the new system. The upload script fetched the bearer token from Secrets Manager, parsed the markdown, and created the draft — all in one command. No credentials were typed, pasted, or stored in any repository.
The portfolio's content pipeline is now fully self-service: write a post, run the script, review in admin, publish.
This post is part of a series documenting the development of arkadiuszkulpa.co.uk.