Sunday, January 4, 2026

From the Ground Up: Building an actual website - Part 1: The Idea and Authentication

Photo by groble on Freeimages.com
I've been studying industrial-scale websites for a long time now. I've been reading about them, working on them, and trying to understand them that whole time. But something has always been missing. It feels like I understand parts of them, but not the whole thing at once. Like the blind men and the elephant. Maybe understanding real industrial-scale websites is too much for one person, but I still believe I can do better. I've started to believe the only way to learn them the way I want to is to try building one myself.

The Idea

When I worked at a large midwestern chemical information company, I adopted a chemical lookup program. The group I worked in needed a quick lookup tool for commonly used chemicals in the documentation we were writing. The program I adopted stored the chemical list in a simple spreadsheet and used a basic UI to search it.

When I adopted it, I expanded it into a full three-tier application with a web UI frontend, a compiled middle tier, and a fully normalized database backend. It was ridiculously overengineered for the demands placed on it (I don't think anybody but me ever used it), but it was a good way to test concepts like database normalization, three-tier development, web calls, and so on.

I wanted to continue with that, but since I no longer work for that company and was contemplating web development and deployment, I was leery about taking their list of synonyms and search terms and publishing it for the world to access. I don't think this application has any commercial value whatsoever, but it's probably better to be safe than sorry.

So what if the users came up with their own substances and search terms? What if a small drug development company wanted a way to store and manage access to a list of chemicals, with each development team uploading their own drug targets, abbreviations, and properties? So that's the idea I want to develop. MyOrg has been born!

Let's start from the very beginning (a very good place to start)

All the security blogs and speakers say the time to think about security, authentication, and authorization is at the start of a project and throughout its lifetime. Since this is an area about which I know very little, I decided to start there.

At another previous employer I had been tasked with implementing OAuth2 authorization with Github. Since I knew something about that, and Github is widely used and known in the developer community, I decided on Github-based authentication. My main goal for authentication was that as much of the work as possible be done by the backend. Some frontend code is necessary to kick off the process and for the user to authorize the Github app to grant access to MyOrg, but the backend should do the token exchange and manage the results of authentication. Also, based on a blog I read a year or so back, I don't want to be sending Github tokens back and forth between the backend and frontend, so after the initial authentication with Github I decided to have the backend generate a certificate-signed JWT and use that for all subsequent interactions.

Github says their Github Apps are preferred over OAuth apps because of their finer-grained control of permissions and allowed activities, so that's the route I took. I created two versions of the app, one for local development and one for production. Since I'm only using them for authentication, I allowed them to request only minimal access to users' accounts. I made note of their client IDs and secrets. For the local app I set the callback URL to the localhost URL and port the app runs on (https://localhost:7055/auth/callback). I'll describe the production URL later.

I created a new C# ASP.NET application with the minimal API and wrote an Auth endpoint and service for it. Then I wrote the following methods to handle a user's initial request:


public static class AuthEndpoints
{
    public static void MapAuthEndpoints(this IEndpointRouteBuilder app)
    {
        var group = app.MapGroup("/auth");

        group.MapGet("/login", ([FromQuery] string origin, [FromServices] IAuthService auth) =>
        {
            var url = auth.GetLoginRedirectUrl(origin);
            return Results.Redirect(url);
        });
        ...
    }
}

public class AuthService : IAuthService
{
    private const string State = "abc123";
    ...
    
    public string GetLoginRedirectUrl(string origin)
    {
        string enhancedState;
        if (string.IsNullOrEmpty(origin))
        {
            enhancedState = State;
        }
        else
        {
            var originState = Convert.ToBase64String(System.Text.Encoding.UTF8.GetBytes(origin));
            enhancedState = $"{State}+{originState}";
        }

        var q = HttpUtility.ParseQueryString(string.Empty);
        q.Add("client_id", _github.ClientId);
        q.Add("redirect_uri", _github.RedirectUri);
        q.Add("state", enhancedState);
        q.Add("allow_signup", "false");

        var theUrl = $"{_github.AuthUrl}/authorize?{q}";
        _logger.LogDebug("Redirect URL: {URL}", theUrl);
        return theUrl;
    }
}
I made all these values available to the backend through dotnet's configuration system. In development I put the non-sensitive values (client ID, redirect URI, auth URL, etc.) in appsettings.Development.json and the sensitive client secret in dotnet's user-secrets. Then I used dotnet's configuration binding to parse them into an object that gets injected into the services that need them.
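
In outline, the binding looks something like this. This is a simplified sketch; I'm reconstructing the options class from the properties the service uses, so the real one differs in the details:

public class GithubOptions
{
    // Configuration section name; matches the GithubOptions__ prefix used for
    // environment variables later on.
    public const string Github = "GithubOptions";

    public string ClientId { get; set; } = string.Empty;
    public string ClientSecret { get; set; } = string.Empty;
    public string RedirectUri { get; set; } = string.Empty;
    public string AuthUrl { get; set; } = string.Empty;
    public string ApiUrl { get; set; } = string.Empty;
    public Dictionary<string, string> RequiredHeaders { get; set; } = new();
}

// Program.cs: bind the section from whatever sources are active (appsettings,
// user-secrets, environment variables). The service then presumably takes an
// IOptions<GithubOptions> in its constructor and keeps options.Value in _github.
builder.Services.Configure<GithubOptions>(
    builder.Configuration.GetSection(GithubOptions.Github));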

For frontend reasons I needed to know the origin URL the login request was made from, so I capture that as soon as the request comes in. To survive the round trip to Github and back, I concatenate it with a State value (hardcoded to "abc123" for now) and send the result as the state query parameter.

The callback and token exchange process took me a while to work out. The basic process is fairly easy:


    public static void MapAuthEndpoints(this IEndpointRouteBuilder app)
    {
        var group = app.MapGroup("/auth");

        ...

        group.MapGet("/callback", async (
            [FromQuery] string code,
            [FromQuery] string state,
            [FromServices] IAuthService auth,
            HttpContext ctx) =>
        {
            var pair = state.Split('+', 2); // split on the first '+' only; the base64-encoded origin may itself contain '+'
            var stateString = pair[0];
            var originString = pair.Length > 1 ? pair[1] : Convert.ToBase64String(System.Text.Encoding.UTF8.GetBytes("http://nowhere.com/nothing"));
            var origin = System.Text.Encoding.UTF8.GetString(Convert.FromBase64String(originString));
            var user = await auth.HandleCallbackAsync(code, stateString);
            return Redirect($"http://localhost:5173/dashboard?token={user}");
        }
    }
...
    public async Task<User> HandleCallbackAsync(string code, string state)
    {
        if (state != State)
        {
            throw new UnauthorizedAccessException("State mismatch");
        }

        var data = new Dictionary<string, string>
        {
            ["client_id"] = _github.ClientId,
            ["client_secret"] = _github.ClientSecret,
            ["code"] = code,
            ["redirect_uri"] = _github.RedirectUri,
        };

        using var response = await _http.PostAsync($"{_github.AuthUrl}/access_token", new FormUrlEncodedContent(data));
        response.EnsureSuccessStatusCode();

        var responseBody = await response.Content.ReadAsStringAsync();
        var queryParams = HttpUtility.ParseQueryString(responseBody);
        var accessToken = queryParams["access_token"];

        if (string.IsNullOrEmpty(accessToken))
        {
            throw new UnauthorizedAccessException("Failed to retrieve Github token");
        }

        using var userRequest = new HttpRequestMessage(HttpMethod.Get, $"{_github.ApiUrl}/user");
        userRequest.Headers.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
        foreach (var (key, value) in _github.RequiredHeaders)
        {
            userRequest.Headers.Add(key, value);
        }

        using var userResponse = await _http.SendAsync(userRequest);
        userResponse.EnsureSuccessStatusCode();

        var user = await userResponse.Content.ReadFromJsonAsync<User>()
            ?? throw new InvalidOperationException("Failed to parse Github user.");
        var authenticatedUser = GenerateUserWithJwt(user);
        return authenticatedUser;
    }
I check that the state matches the expected value, then construct the form data Github expects for the token exchange. When the result comes back, if it contains an access token, I immediately query Github for the user record of the person making the request and parse it into a User record I wrote.
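
The User record itself is just a thin projection of Github's user payload plus a slot for the JWT. Roughly like this (a simplified sketch; the real record may carry more fields, and the snake_case mapping could just as well be done with a JSON naming policy):

using System.Text.Json.Serialization;

public record User
{
    public string Login { get; init; } = string.Empty;
    public string? Name { get; init; }
    public string? Company { get; init; }
    public string? Url { get; init; }

    // Github returns snake_case JSON, so multi-word fields need explicit mapping
    // for ReadFromJsonAsync<User>() to populate them.
    [JsonPropertyName("organizations_url")]
    public string? OrganizationsUrl { get; init; }

    [JsonPropertyName("site_admin")]
    public bool SiteAdmin { get; init; }

    // Filled in by GenerateUserWithJwt after the token is signed.
    public string? Jwt { get; init; }
}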

This is the process, borrowed from still another previous client, for creating a JWT signed with a certificate from the local certificate store:


    private User GenerateUserWithJwt(User user)
    {
        var cert = GetCertificateFromStore(_jwt.Thumbprint)
            ?? throw new InvalidOperationException("Signing certificate not found.");
        var key = new X509SecurityKey(cert);
        var creds = new SigningCredentials(key, SecurityAlgorithms.RsaSha256Signature);

        var claims = new[]
        {
            new Claim("LE-User-Name", user.Name ?? string.Empty),
            new Claim("LE-User-Login", user.Login ?? string.Empty),
            new Claim("LE-Company", user.Company ?? string.Empty),
        };

        var token = new JwtSecurityToken(
            _jwt.Issuer,
            _jwt.Audience,
            claims,
            expires: DateTime.UtcNow.AddDays(1),
            signingCredentials: creds
        );

        return new User
        {
            Login = user.Login ?? string.Empty,
            Name = user.Name ?? string.Empty,
            Url = user.Url,
            Company = user.Company ?? string.Empty,
            OrganizationsUrl = user.OrganizationsUrl,
            SiteAdmin = user.SiteAdmin,
            Jwt = new JwtSecurityTokenHandler().WriteToken(token),
        };
    }

    private static X509Certificate2? GetCertificateFromStore(string thumbprint, StoreName storeName = StoreName.My)
    {
        using var certStore = new X509Store(storeName, StoreLocation.LocalMachine);
        certStore.Open(OpenFlags.ReadOnly);
        var certs = certStore.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, false);

        if (certs.Count == 0)
        {
            // this is for local testing. I'm guessing there is a better way to do this?
            using var userStore = new X509Store(storeName, StoreLocation.CurrentUser);
            userStore.Open(OpenFlags.ReadOnly);
            certs = userStore.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, false);
        }

        return certs.Count == 0 ? null : certs[0];
    }
I'm pretty sure this wouldn't work in a cloud environment, nor on a non-Windows setup, but I decided I could come back to it eventually. This worked locally on my work PC and was enough to get me going.

To kick off the authorization call from a frontend I just make a call to the backend:


const url = new URL('http://localhost:5164/auth/login')
const urlString  = url.toString()

const doLogin = () => {
  window.location.href = urlString
}
The Redirect returned from the backend, along with Vue routing, took care of loading the correct page when login succeeded.

I didn't like this for a couple of reasons. Ironically, the part I thought would be hardest to fix was actually the easiest, and the part I thought would be pretty straightforward was the hardest to get past.

The first is the use of a local certificate store for signing the JWT. Fortunately, moving to Azure and setting a few values in the startup process and a KeyVault pretty much took care of that. I'll describe it in more detail shortly.

Second was the hardcoded Redirect in the backend to handle the successful login path. I didn't think the backend should know that much about the structure of the frontend application; it should be able to return something more general, like just a JSON document with the credentials. Unfortunately, that just wasn't possible. It was the cause of my biggest, most passionate argument with ChatGPT to date, and I lost that battle. Because the callback is loaded directly by the browser, the backend has to return something the browser can render as HTML; any plain textual data, like JSON, will simply be displayed in the window, which is not at all what I want. So what I finally worked out was returning a simple HTML document with a call to a presumed function in the calling webpage, which I further assume has opened a popup window to perform the login process.


    group.MapGet("/callback", async (
        {
            ...
            var userJson = JsonSerializer.Serialize(user);
            return Results.Content($@"
                <html>
                    <body>
                        <script>
                            window.opener.postMessage({userJson}, '{origin}')
                            console.log('postMessage sent!')
                        </script>
                    </body>
                </html>
            ", "text/html");
        });

const frontendOrigin = window.location.origin
const url = new URL(`${baseUrl}/auth/login?origin=${encodeURIComponent(frontendOrigin)}`)
const urlString  = url.toString()

const doLogin = () => {
  const width = 600, height = 700
  const left = (screen.width - width) / 2
  const top = (screen.height - height) / 2

  const handleMessage = (event: MessageEvent) => {
    if (event.origin !== baseUrl) {
      return
    }

    if (!event.data || typeof event.data !== 'object') {
      return
    }

    const user = event.data as User

    if (!user) {
      return
    }

    try {
      auth.init(JSON.stringify({ ...user }))
    } catch (err) {
      console.error('[auth] init failed', err)
    } finally {
      clearTimeout(to)
      window.removeEventListener('message', handleMessage)
      try { popup?.close() } catch {}
      router.push('/dashboard')
    }
  }

  window.addEventListener('message', handleMessage)

  const popup = window.open(
    urlString,
    '_blank',
    `width=${width},height=${height},top=${top},left=${left}`
  )

  if (!popup) {
    window.removeEventListener('message', handleMessage)
    console.warn('[auth] popup blocked')
    return
  }

  const timeoutMs = 2 * 60 * 1000
  const to = setTimeout(() => {
    console.warn('[auth] auth message timeout; removing listener')
    window.removeEventListener('message', handleMessage)
    try { popup.close() } catch {}
  }, timeoutMs)
}
With a valid JWT in hand, it's pretty easy to require one in order to query your endpoints. All it takes is some settings in Program.cs and a method call on the endpoints to be secured. There can also be custom requirements on the JWT, such as the presence of a user claim.

// Program.cs
builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme).AddJwtBearer(options =>
{
    var jwtOptions = builder.Configuration.GetSection(JwtTokenOptions.Jwt).Get<JwtTokenOptions>()
        ?? throw new InvalidOperationException("JWT options not found.");
    var cert = SecretsService.LoadCertificate(jwtOptions);
    options.TokenValidationParameters = new TokenValidationParameters
    {
        IssuerSigningKey = new X509SecurityKey(cert),
        ValidateIssuer = true,
        ValidateIssuerSigningKey = true,
        ValidateAudience = true,
        ValidateLifetime = true,
        ValidAudience = builder.Configuration["JwtTokenOptions:Audience"],
        ValidIssuer = builder.Configuration["JwtTokenOptions:Issuer"],
    };
});
builder.Services.AddAuthorizationBuilder()
    .AddPolicy("IsUser", policy => policy.RequireClaim("LE-User-Login").Build());
...
app.UseAuthorization();

// SearchEndpoints.cs
public static class SearchEndpoints
{
    public static void MapSearchEndpoints(this IEndpointRouteBuilder app)
    {
        app.MapGet("/search", async (
                [FromServices] ISearchService searchService,
                [FromQuery(Name = "st")] string[] searchTerms,
                CancellationToken cancellationToken) =>
                await searchService.Search(searchTerms, cancellationToken)
            )
            .RequireAuthorization();
    }
}
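One note on that last call: .RequireAuthorization() with no arguments only applies the default requirement (an authenticated caller). To enforce the custom IsUser policy defined in Program.cs, the endpoint has to name it explicitly, something like:

        // Same endpoint, but requiring the IsUser policy (LE-User-Login claim present).
        app.MapGet("/search", async (
                [FromServices] ISearchService searchService,
                [FromQuery(Name = "st")] string[] searchTerms,
                CancellationToken cancellationToken) =>
                await searchService.Search(searchTerms, cancellationToken)
            )
            .RequireAuthorization("IsUser");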
So this worked locally. The next challenge is to deploy it and get it to work in the cloud.

Ship it!

I decided to deploy the app to Azure, since it's the platform I know the least about and it's what my current client is using. I've worked in Azure before and was certified in it at one point, but again, what I know is mostly theoretical, so I decided to make it more concrete by deploying there.

Since Microsoft's acquisition of Github, that seems to be where they're devoting most of their recent development effort. Again, however, since I'm less familiar with Azure DevOps, I decided to implement this project fully there, right down to the source code repository and the task board I use to manage my own work.

Microsoft provides starting templates for building and deploying common types of projects, so I just took and adapted one of them. I added it to my repository in Azure DevOps, adjusted the values to suit my project, committed it, and then it was available to pull down locally.

From the descriptions on Azure it sounded like an app service was what I needed, so that's what I wrote the YAML file to deploy to. Interestingly, the app service needs to exist before you can deploy to it. So I went to Azure, created a resource group to hold all the artifacts I would need, and created the app service. You also need a Service Connection between Azure DevOps and the Azure subscription, so I followed the steps in ADO to set that up. That was enough to get the code and the app out to Azure, but I still needed to fix the configuration and certificate generation problems.

Adding non-sensitive configuration values to the app was a simple matter of adding them to the runtime environment. That can be done on the app service page in the Azure portal: select Settings, then Environment variables, and add the keys and values you need. Use a double underscore in place of the section separator ASP.NET uses, e.g., GithubOptions:ClientId becomes GithubOptions__ClientId. Now when the app host starts, those values are available to the app.
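
Either way the value lands on the same configuration key, so the code that reads it doesn't care which environment it's running in:

// "GithubOptions__ClientId" in the app service environment and the nested
// "GithubOptions": { "ClientId": ... } section in appsettings both surface
// as the same key here:
var clientId = builder.Configuration["GithubOptions:ClientId"];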

I created a KeyVault to hold the certificate and the Github ClientSecret. Creating the vault was easy, but I quickly ran into a frustrating aspect of Azure: I couldn't add values to the vault I had just created! I have come to learn that I, as a developer, am not Microsoft's primary customer. My almost-ultimate boss, the CTO of my organization, is, so all of Microsoft's products are geared toward him or her, not toward me. Even though it makes perfect sense to me to be able to add stuff to a resource I just created, that's probably not how most tech organizations work. The overworked and always-busy owner of the Azure resources may respond to my request to create a KeyVault, but will probably not be the one who adds values to it. Minimal access to the Nth degree. So, immediately after creating the KeyVault, I had to turn around and look up how to give myself permission to add values to it. That was an easy Google/ChatGPT search, and then I was able to add the Github ClientSecret to it. I decided to ask KeyVault to generate the signing certificate rather than generating one myself and uploading it, though that route is possible too.

The code changes needed to take advantage of the KeyVault were fairly simple. I added the KeyVault name and SecretName (just the name of the certificate, not really a secret) to configuration. Then I modified Program.cs to access the KeyVault:


if (!builder.Environment.IsDevelopment())
{
    var vaultName = builder.Configuration["JwtTokenOptions:KeyVaultName"];
    var kvUri = new Uri($"https://{vaultName}.vault.azure.net/");
    builder.Configuration.AddAzureKeyVault(kvUri, new DefaultAzureCredential());
}
With that, the Github ClientSecret just appeared in the app's configuration (presumably because the secret is named GithubOptions--ClientSecret; the Key Vault configuration provider maps the double dash to the : section separator). Loading the certificate took a bit more code:

public static class SecretsService
{
    public static X509Certificate2 LoadCertificate(JwtTokenOptions options)
    {
        var client = new SecretClient(
            new Uri($"https://{options.KeyVaultName}.vault.azure.net/"),
            new DefaultAzureCredential());
        var secret = client.GetSecret(options.KeyVaultSecretName);
        var pfxBytes = Convert.FromBase64String(secret.Value.Value);
        return X509CertificateLoader.LoadPkcs12(pfxBytes, null);
    }
}
I think there was something else in there about giving the app service permission to access the KeyVault, but that was a simple permission and configuration change in Azure. But after that the deployed version of the app worked too!

So now I have an app that allows authentication to Github, generates a signed JWT, and requires that JWT to query endpoints on the backend. Now I can get on with the fun stuff.

Source code

The working repository for this project is in Azure DevOps, and is private by default. I haven't found a way to change that after creation, so I am mirroring the source code files to Github.
