Wednesday, August 13, 2025

Publishing New Stock Prices


The next step in the process is to enable the sending of updates from the cloud to our webapp.

This is essentially very simple, but as with everything else, we need to put in significant effort to wire it up.

But there is a very cheap way of sending an update to the browser, which is to use the apigatewaymanagementapi subcommand of the aws command line tool. This always feels a little weird to me, because my mental model of a websocket is analogous to a socket, with a lambda on the far end. That is not true, of course.

The websocket is owned by APIGateway, and so anybody that wants to send anything down the socket needs to contact APIGateway and send a request. As we saw last time, for this to work they need two things:
  • connectivity to the APIGateway;
  • the "execute-api:ManageConnections" permission on the appropriate resource.
Fortunately, the gateway is out on the open internet and my "admin" user has the permission. So I can just run this command:
AWS_PROFILE=ziniki-admin aws apigatewaymanagementapi post-to-connection --data '"Hello from API Gateway!"' --endpoint-url https://n2n2psybtd.execute-api.us-east-1.amazonaws.com/development --connection-id "O_nmJfJWoAMCE9w="
And if I look in the console of my browser, I see a message like this:
MessageEvent {isTrusted: true, data: '"Hello from API Gateway!"', origin: 'wss://n2n2psybtd.execute-api.us-east-1.amazonaws.com', lastEventId: '', source: null, …}
I know what you're thinking: how did that work and where did that connection-id come from? (Actually, you're probably only wondering one of those).

The connection-id is a key which tells APIGateway which of the many websockets it is currently holding onto to send the message to. Each websocket connection is assigned this identifier, and it arrives with every incoming request. You can use it to respond, as we did in the last episode, or you can store it to send asynchronous messages (as we'll do in the next episode). For simplicity, I sent it over with the data in the response from the lambda, and just pulled it out of there to put on the command line now.

Doing this Programmatically

I just sent a trivial message here, which was displayed in the console but, lacking the appropriate action field, did nothing else. With a little thought we could do better from the command line, but we are going to skip that step and move straight on to doing it with another API Gateway and another lambda. The idea is that we can send new prices for one or more stocks from the command line (using curl) and the prices will be updated and the connections notified automatically. For now, we are going to provide the connection ids ourselves; the final step (next time) will be to use Neptune and DynamoDB to coordinate all this.

For some reason, when using APIGateway, you have to configure either a websocket gateway or an HTTP gateway, so we can't reuse our existing gateway but need to declare a new one. We could reuse the same lambda, but that doesn't seem like a good choice to me right now, so we are going to declare another one of those as well. Obviously (since it will need access to Neptune) it needs to be in the same VPC. And since it will want to send messages to the websockets, it needs the IPv6 "dualstack" property. Even though the new API Gateway won't be handling websockets, it seems to make sense to make it IPv6-compatible as well.

So we add this to our infrastructure.dply file:
Now we "duplicate" everything for the flow to publish new prices.  Things are
not quite the same, but close.

        lambda.function "publish-lambda" => publish_lambda
            @teardown delete
            Runtime <- "go"
            Code <= aws.S3.Location
                Bucket <- "ignorance-bucket.blogger.com"
                Key <- "lambda-publish.zip"
            Role <= aws.IAM.WithRole "ignorance-lambda-role"
                assume
                    allow "sts:AssumeRole"
                        principal "Service" "lambda.amazonaws.com"
                policy
                    allow aws.action.ec2.CreateNetworkInterface "*"
                    allow aws.action.ec2.DescribeNetworkInterfaces "*"
                    allow aws.action.ec2.DeleteNetworkInterface "*"
                    allow "logs:CreateLogGroup" "*"
                    allow "logs:CreateLogStream" "*"
                    allow "logs:PutLogEvents" "*"
                    allow "execute-api:ManageConnections" "arn:aws:execute-api:*:*:*/development/*/@connections/*"
            PublishVersion <- true
            Alias <- "next"
            VpcConfig <= aws.VPC.Config
                DualStack <- true
                Subnets <- vpc->subnets
                SecurityGroups <- vpc->securityGroups

We want to work through the alias, which was just created, so find that specifically

        find aws.Lambda.Alias "next" => publishAlias
            FunctionName <- "publish-lambda"

Now we need to define an HTTP API Gateway to access this lambda:

        api.gatewayV2 "stock-publish"
            @teardown delete
            Protocol <- "http"
            IpAddressType <- "dualstack"
            integration "lambda"
                Type <- "AWS_PROXY"
                Uri <- publishAlias->arn
                PayloadFormatVersion <- "2.0"
            route "$default" "lambda"
            stage "development"

NEPTUNE_PUBLISH:neptune/dply/infrastructure.dply

This is basically the same as for the watch lambda except we are configuring an HTTP gateway rather than a websocket one.

So we just need to implement a lambda which does the publication for us. This is similar in a lot of ways to the watch lambda, but it has a different prototype for the handle function because it's an HTTP handler, not a websocket handler.

Here's the prologue ceremony:
package main

import (
    "context"
    "encoding/base64"
    "fmt"
    "log"
    "net/url"
    "strconv"
    "strings"

    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
    "github.com/gmmapowell/ignorance/neptune/internal/client"
)

NEPTUNE_PUBLISH:neptune/lambda/publish/main.go

Starting at the end, main() identifies the actual handler function:
func main() {
    lambda.Start(handleRequest)
}

NEPTUNE_PUBLISH:neptune/lambda/publish/main.go

So now we can move on to actually writing the handler:
var sender *client.Sender

func handleRequest(ctx context.Context, event events.APIGatewayV2HTTPRequest) (events.APIGatewayV2HTTPResponse, error) {
    if sender == nil {
        sender = client.NewSender("n2n2psybtd.execute-api.us-east-1.amazonaws.com", "development")
    }

    formData, r, err := readForm(&event)
    if r != nil {
        return *r, err
    }

    quotes, r, err := buildQuotes(formData)
    if r != nil {
        return *r, err
    }

    // a hack right now; will be replaced with neptune
    connIds := formData["connId"]
    for _, connId := range connIds {
        log.Printf("have quotes %v; sending to %s\n", quotes, connId)
        sender.SendTo(connId, quotes)
    }

    resp := events.APIGatewayV2HTTPResponse{StatusCode: 200, Body: ""}
    return resp, nil
}

NEPTUNE_PUBLISH:neptune/lambda/publish/main.go

Note that to make this work, I have refactored the watch lambda to extract all the relevant methods to the internal packages. In reality, files in lambda packages should be treated exactly the same as those in main packages: the minimum amount of code should go there. I broke this rule with the watcher because I wanted to get something to work. But even so, I feel more of both of these lambda main functions should be extracted into internal.

The sender is a package-level variable holding the client responsible for communicating with the websocket(s); Sender is the name of the abstraction we created when refactoring. Note that we are not using the name of this API Gateway but the one that serves the websockets: blindly copying the watcher code would lead to us passing in the current domain name. Obviously, in the "real" world this would not be hardcoded; it would probably be an environment variable attached to the lambda definition in the deployer file, but this is just a demo, so let's press on.

There are obviously decisions to be made about how we indicate the variables to be passed in the HTTP request; I have decided to use form data; this would be a good fit for an old-style web form, but is also compatible with AJAX requests and, most importantly for us, is easy to use with curl.

Once we have the form data, we can build it into a list of Quote objects - the same objects we already used with the watcher, which are already set up to encode as JSON.

We then retrieve the list of connection ids from the form data, and then for each connection id we post the quotes to the management gateway with that id.

Finally, we return a status code of 200 to indicate success with no message (yes, we possibly should use 204).

This code obviously depends on supporting functions which are dull processing, but for completeness here they are:
func readForm(event *events.APIGatewayV2HTTPRequest) (url.Values, *events.APIGatewayV2HTTPResponse, error) {
    method := event.RequestContext.HTTP.Method
    if method != "POST" {
        log.Printf("request was not POST but %s\n", method)
        return nil, &events.APIGatewayV2HTTPResponse{StatusCode: 400, Body: "must use POST"}, nil
    }

    contentType := event.Headers["content-type"]
    if !strings.Contains(contentType, "application/x-www-form-urlencoded") {
        log.Printf("content type did not say it was a form but %s\n", contentType)
        return nil, &events.APIGatewayV2HTTPResponse{StatusCode: 400, Body: "must use content type application/x-www-form-urlencoded"}, nil
    }

    body := event.Body
    if event.IsBase64Encoded {
        decodedBody, err := base64.StdEncoding.DecodeString(body)
        if err != nil {
            log.Printf("error decoding base64: %v\n", err)
            return nil, &events.APIGatewayV2HTTPResponse{StatusCode: 500, Body: "decoding base64 failed"}, err
        }
        body = string(decodedBody)
    }

    // Parse the form data
    formData, err := url.ParseQuery(body)
    if err != nil {
        log.Printf("error parsing body as query: %v\n", err)
        return nil, &events.APIGatewayV2HTTPResponse{StatusCode: 500, Body: "parsing failed"}, err
    }

    return formData, nil, nil
}

func buildQuotes(formData url.Values) ([]client.Quote, *events.APIGatewayV2HTTPResponse, error) {
    tickers := formData["ticker"]
    prices := formData["price"]
    if len(tickers) != len(prices) {
        log.Printf("mismatched tickers and prices: %d %d\n", len(tickers), len(prices))
        return nil, &events.APIGatewayV2HTTPResponse{StatusCode: 400, Body: "mismatched tickers and prices"}, nil
    }

    var quotes []client.Quote
    for i, t := range tickers {
        ps := prices[i]
        p, err := strconv.Atoi(ps)
        if err != nil {
            log.Printf("could not parse %s as a number\n", ps)
            return nil, &events.APIGatewayV2HTTPResponse{StatusCode: 400, Body: fmt.Sprintf("not a number: %s", ps)}, err
        }
        quotes = append(quotes, client.Quote{Ticker: t, Price: p})
    }

    return quotes, nil, nil
}

NEPTUNE_PUBLISH:neptune/lambda/publish/main.go

We can now send through an update message like so:
curl -vk -HContent-Type:application/x-www-form-urlencoded -d ticker=AAPL -d price=2400 -d ticker=GOOG -d price=31300 -d connId=PCjUkcrRIAMCI-w= https://uxfcmjy8e7.execute-api.us-east-1.amazonaws.com/development
This appears in our browser window.

Conclusion

We have been able to configure everything to publish updated stock prices using an APIGateway and a Lambda. Now we just need to couple everything up to Neptune.

Tuesday, August 12, 2025

Accessing Services from a VPC Lambda

We reached the point where I claim that we have all the code we need in order to have our stock watcher update from the lambda. But nothing is working because there is no connectivity between the lambda and the API Gateway (in that direction; there is connectivity between the Gateway and the lambda which enables it to be invoked).

There are a number of solutions to this problem:
  • There are some services (such as CloudWatch, which we are successfully using; also Neptune and DynamoDB) which "just work";
  • There are some services (such as SSM, the Systems Manager) which have custom endpoints you can identify in your lambda code;
  • It is possible to define a NAT gateway;
  • It is possible to use AWS PrivateLink;
  • It is possible to use IPv6 and an egress-only gateway.
Each of these has issues. The first two are simply not solutions for the API Gateway. The third and fourth both cost money, not only when you use them, but whenever you have them defined. For me, this goes against the whole principle of serverless, and seems like a rip-off: AWS makes you put certain lambdas in VPCs, and then doesn't allow you to mix that with other AWS services, and then charges you a standing fee to solve the problem. The fifth solution, then, is what we're left with, but that requires us to mess with IPv6.

It took AWS forever to implement IPv6. I forget why I first wanted to use IPv6 on AWS, but it was back in 2012 or 2013. I was amazed that it wasn't an option. I was even more amazed that it wasn't on their roadmap. If I read the histories correctly, IPv6 egress for lambdas in a VPC only arrived in 2023.

I don't fully understand these things, but the reason you need a NAT gateway from a VPC is that you are using "local" (private) IPv4 addresses. The IPv6 addresses AWS assigns, by contrast, are globally unique, so there is no need to translate them to public addresses, and hence no NAT gateway is needed. Instead, there is a component called an "egress-only" internet gateway which supports only IPv6 and has the significant advantage that it is completely free.

All of my VPC setup is off-camera, so I can just tell you that I created an egress only internet gateway and then enabled IPv6 on all the subnets in my VPC.

That's a good start, but it's not enough. In order for any of this to work, both the lambda and the API gateway also need to support IPv6. Each of them has an option to enable "dual stack" processing - that is, both IPv4 and IPv6. For the lambda, it is part of the VpcConfig setting:
            VpcConfig <= aws.VPC.Config
                DualStack <- true
                Subnets <- vpc->subnets
                SecurityGroups <- vpc->securityGroups

NEPTUNE_WATCH_LAMBDA:neptune/dply/infrastructure.dply

And for the API Gateway, we need to specify the IpAddressType at the top level:
        api.gatewayV2 "stock-watch"
            @teardown delete
            Protocol <- "websocket"
            IpAddressType <- "dualstack"
            RouteSelectionExpression <- "$request.body.action"

NEPTUNE_WATCH_LAMBDA:neptune/dply/infrastructure.dply

It may not be obvious why we need this in the API Gateway, but the IpAddressType property is really there to enable the gateway to provide its web services to IPv6 clients. We are not really doing that, but the mechanism is the same: you make an HTTP request to the gateway from within the lambda. Since that request is travelling "over" IPv6, we need the gateway to be listening to IPv6 traffic.

I'm going to admit that much of this still confuses me. I think I understand IPv6 addressing reasonably enough, but I have always been confused by "multi-homed" machines with multiple network cards. How does it know which to choose? Particularly when there are so many (at least 6 - two protocols for each of three subnets). But it does, somehow, and it works. And doesn't require me to pay for a NAT gateway.

Conclusion

The new networking options in AWS make it possible to use IPv6 to have a free route from within a VPC to the wider internet, including AWS services. It's a little complicated, but hopefully this article makes it clear what needs to be done for this case at least.

Delivering Prices from a Lambda


Step two is to assemble all the infrastructure in the cloud to deliver prices. For now, we are just going to deliver one price, statically, on connection, but, believe me, it's the principle of the thing that counts.

Websocket Quote Listener

The first thing we're going to do here is to add a JavaScript listener. Basically, we're going to copy the outline of the code we wrote for the mock listener, open a websocket and connect that to the watcher.
class WebsocketStockQuoter {
    constructor(wsuri) {
        this.conn = new WebSocket(wsuri);
        this.conn.onmessage = msg => {
            console.log(msg);
            if (this.lsnr) {
                var data = JSON.parse(msg.data);
                if (data.action && data.action === 'quotes') {
                    this.lsnr.quotes(data.quotes)
                }
            }
        }
        this.conn.onopen = () => {
            console.log("telling lambda who we are - expect quotes after this");
            this.conn.send(JSON.stringify({"action":"user","id":"user003"}));
        }
        this.conn.onclose = err => {
            console.log("closing because", err);
        }
    }

    provideQuotesTo(lsnr) {
        this.lsnr = lsnr
    }
}

export { WebsocketStockQuoter }

NEPTUNE_WEBSOCKET_QUOTER:neptune/app/js/webstockquoter.js

The console.log statements are there for the debugging I know I am going to have to do.

When you use APIGateway with Lambda over a websocket, there are three routes you can listen to: $connect, $default and $disconnect. I am just going to listen to $default, so nothing is going to happen until I send a message - in onopen. In reality, the user would generally need to perform some kind of authentication, so it's rare that you can do anything on the server side before the client sends its first message anyway. Consider passing over our user name as "doing security".

And we change stocks.js to use this (keeping the old code around commented out "just in case"):
// import { MockStockQuoter } from './mockstockquoter.js';
import { WebsocketStockQuoter } from './webstockquoter.js';
import { QuoteWatcher } from './quotewatcher.js';

window.addEventListener("load", () => {
    var table = document.querySelector(".stock-table tbody");
    var templ = document.getElementById("stockrow");
    var quoteWatcher = new QuoteWatcher(table, templ);
    var quoter = new WebsocketStockQuoter("wss://n2n2psybtd.execute-api.us-east-1.amazonaws.com/development/");
    // var quoter = new MockStockQuoter();
    quoter.provideQuotesTo(quoteWatcher);
});

NEPTUNE_WEBSOCKET_QUOTER:neptune/app/js/stocks.js

Defining the Infrastructure

In order to work in the AWS cloud, we need to deploy a lot of infrastructure. In large part, it was this specific use case that set me down the path of writing my own, "sensible" deployer tool, so let's use it here. Hopefully the commentary within the script explains what is going on.
We want to have a web app that can access Neptune.  In order for that to work, we need to
define and declare a lambda, and then connect it through an API Gateway.  We actually need
multiple units to handle watching prices and updating them

First off, we need an S3 bucket to store our code in

        ensure aws.S3.Bucket "ignorance-bucket.blogger.com" => bucket
            @teardown preserve

We need to recover the VPC we have put Neptune in

        find aws.VPC.VPC "Test" => vpc

Then we need a lambda

        lambda.function "watch-lambda" => watch_lambda
            @teardown delete
            Runtime <- "go"
            Code <= aws.S3.Location
                Bucket <- "ignorance-bucket.blogger.com"
                Key <- "lambda-watch.zip"

The role for the lambda needs to say that it can be assumed by lambda,
and then needs to have the permissions to set up the VPC, along with
permissions to access other services we will need.

            Role <= aws.IAM.WithRole "ignorance-lambda-role"
                assume
                    allow "sts:AssumeRole"
                        principal "Service" "lambda.amazonaws.com"
                policy

These permissions are needed to allow the lambda to configure its VPC

                    allow aws.action.ec2.CreateNetworkInterface "*"
                    allow aws.action.ec2.DescribeNetworkInterfaces "*"
                    allow aws.action.ec2.DeleteNetworkInterface "*"

These allow the lambda to write to CloudWatch

                    allow "logs:CreateLogGroup" "*"
                    allow "logs:CreateLogStream" "*"
                    allow "logs:PutLogEvents" "*"

This permission is a weird one, and it's hard to track down a definitive reference for the resource API,
but it's what allows the lambda to send websocket messages.  The resource pattern is:

arn:aws:execute-api:REGION:ACCOUNT:GWID/STAGE/METHOD/@connections/CONNECTION-ID

You obviously stand no chance of guessing the connection ID, but we could generate the rest exactly when
we create the APIGW below, but it's easier to just do this.  Note that, unlike most resource IDs in permissions,
something needs to appear in both the REGION and ACCOUNT slots (I have used *); it is not acceptable to
just leave them blank between colons.

                    allow "execute-api:ManageConnections" "arn:aws:execute-api:*:*:*/development/*/@connections/*"

Lambda has a lot of complicated features, but we will just set up to use
the basic publication and alias features using the alias "next"

            PublishVersion <- true
            Alias <- "next"

In order to access Neptune, the Lambda needs to be in the same VPC.  For whatever
reason, we can't just specify the VPC name, we have to find it (above) and then
copy across the Subnets and Security Groups.

            VpcConfig <= aws.VPC.Config
                Subnets <- vpc->subnets
                SecurityGroups <- vpc->securityGroups

We want to work through the alias, which was just created, so find that specifically

        find aws.Lambda.Alias "next" => nextAlias
            FunctionName <- "watch-lambda"

Now we need to define a websocket API Gateway to access this lambda:

        api.gatewayV2 "stock-watch"
            @teardown delete
            Protocol <- "websocket"
            RouteSelectionExpression <- "$request.body.action"
            integration "lambda"
                Type <- "AWS_PROXY"
                Uri <- nextAlias->arn
            route "$default" "lambda"
            stage "development"

NEPTUNE_DEPLOYER_API:neptune/dply/infrastructure.dply

So now we have all the pieces in place to run our websocket quoter, but we still need the code for a lambda, which we're going to write in Go.

The Lambda

We're going to start with a fairly simple lambda that responds to input and sends a static stock price back.

First, the obligatory ceremony at the top of the file:
package main

import (
    "context"
    "encoding/json"
    "log"
    "net/url"

    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/apigatewaymanagementapi"
    transport "github.com/aws/smithy-go/endpoints"
)

NEPTUNE_WATCH_LAMBDA:neptune/lambda/watch/main.go

Skipping to the end, AWS requires that every lambda has a main() function that calls lambda.Start() with the actual handler, which has one of a variety of defined function types.
func main() {
    lambda.Start(handleRequest)
}

NEPTUNE_WATCH_LAMBDA:neptune/lambda/watch/main.go

The input message is an event which has a defined type in AWS, events.APIGatewayWebsocketProxyRequest, which contains all the elements we need such as the ConnectionId and the Body as a string. We can then unmarshal that to our own event. So we define our input and output types as follows:
type messagePayload struct {
    Action string `json:"action"`
    Userid string `json:"id"`
}

type quotesPayload struct {
    Action       string  `json:"action"`
    ConnectionId string  `json:"connection-id"`
    Quotes       []Quote `json:"quotes"`
}

type Quote struct {
    Ticker string `json:"ticker"`
    Price  int    `json:"price"`
}

NEPTUNE_WATCH_LAMBDA:neptune/lambda/watch/main.go

And then we can actually handle the request in the handleRequest function (the name is not significant; it is simply the function passed as the argument to lambda.Start). This starts by unpacking the event into the messagePayload structure, and then creating a client to send messages back across the websocket.
var apiClient *apigatewaymanagementapi.Client

func handleRequest(ctx context.Context, event events.APIGatewayWebsocketProxyRequest) error {
    if event.IsBase64Encoded {
        log.Printf("cannot unmarshal request in base64")
        return nil
    }

    var request messagePayload
    if err := json.Unmarshal([]byte(event.Body), &request); err != nil {
        log.Printf("Failed to unmarshal body: %v", err)
        return err
    }

    if apiClient == nil {
        apiClient = NewAPIGatewayManagementClient(event.RequestContext.DomainName, event.RequestContext.Stage)
    }

NEPTUNE_WATCH_LAMBDA:neptune/lambda/watch/main.go

The action field determines what we are being asked to do, and so we can switch on that. At the moment, we have only one type of incoming action, which is to identify the user interested in receiving stock quotes. For now, when we receive this message, we immediately respond with a static list of quotes, marshalled from the structures above.
    switch request.Action {
    case "user":
        log.Printf("request for stocks for %s\n", request.Userid)
        resp := quotesPayload{Action: "quotes", ConnectionId: event.RequestContext.ConnectionID, Quotes: []Quote{{Ticker: "EIQQ", Price: 2200}}}
        msgData, err := json.Marshal(&resp)
        if err != nil {
            return err
        }
        connectionInput := &apigatewaymanagementapi.PostToConnectionInput{
            ConnectionId: aws.String(event.RequestContext.ConnectionID),
            Data:         msgData,
        }
        _, err = apiClient.PostToConnection(context.TODO(), connectionInput)
        log.Printf("sent message to %s, err = %v\n", event.RequestContext.ConnectionID, err)
        return err
    default:
        log.Printf("cannot handle user request: %s", request.Action)
    }
    return nil
}

NEPTUNE_WATCH_LAMBDA:neptune/lambda/watch/main.go

Finally, in order to send messages across a websocket, you need to create an apigatewaymanagementapi client which has to be "told" exactly which APIGateway to connect to. It took me a fair while to figure this out, sampling from a number of sources across the internet, until I reached this, which "almost" works:
func NewAPIGatewayManagementClient(domain, stage string) *apigatewaymanagementapi.Client {
    cfg, err := config.LoadDefaultConfig(context.TODO())
    if err != nil {
        log.Printf("could not init config: %v\n", err)
        return nil
    }
    return apigatewaymanagementapi.NewFromConfig(cfg, func(opts *apigatewaymanagementapi.Options) {
        opts.EndpointResolverV2 = &endpointResolver{domain: domain, stage: stage}
    })
}

type endpointResolver struct {
    domain string
    stage  string
}

func (e *endpointResolver) ResolveEndpoint(ctx context.Context, params apigatewaymanagementapi.EndpointParameters) (transport.Endpoint, error) {
    uri := url.URL{Scheme: "https", Host: e.domain, Path: e.stage}
    return transport.Endpoint{URI: uri}, nil
}

NEPTUNE_WATCH_LAMBDA:neptune/lambda/watch/main.go

The core of this is where the code creates a client using the default configuration; this is then modified by a function which takes the Options associated with it and binds the EndpointResolverV2 to a resolver which constructs the URI from the domain and stage.

So far, so good. We can then build and deploy this using the scripts/deploy command. The key to deploying a lambda in Go is to build the binary as an executable called bootstrap, and then put that as the only (or last) file in a zip archive. We can then upload this file to an S3 bucket and point our lambda configuration to that location. The deployer knows to update the function code every time it is asked to update the infrastructure and then to publish it and update the next alias. In short, every time we re-run scripts/deploy, all the necessary work is done to bring the lambda up to date.

We can now reload our webapp, and find out what happens. Which it turns out is nothing. After a while, an error appears in the browser console:
{"message": "Endpoint request timed out", "connectionId":"O9iVddyboAMCJ1A=", "requestId":"O9iVeH0VoAMEtnw="}
It's not entirely clear from this what's gone wrong, but I have past experience with lambdas running in a VPC and I was totally expecting this. By default, lambdas in a VPC cannot access the internet or (almost) any AWS services outside of the VPC. It's a real pain.

Conclusion

We managed to build a lambda, and update our code to connect to it. But as yet, we don't get any stock prices because the lambda cannot connect to the api management gateway. This strikes me as incredibly dumb: to require many lambdas to be in VPCs (because neptune HAS to be in a VPC), but then not allow them to send messages back to their clients.

There are solutions, and in a special bonus episode, we will figure this out next time.

Monday, August 11, 2025

Building a Stock Watching App


Carrying on from where we left off, it's time to build an app to watch these stock prices in Neptune. This consists of three parts:
  • a web app that lives in the browser and listens on a websocket for updates;
  • a web server that lives in the cloud and receives price updates and distributes them to interested listeners;
  • a simple tool for updating the stock prices.
I'm going to do this across four episodes:
  • this time we're going to build the webapp and have it run with a "mock" (or double) price provider, entirely in the browser, so we can have something working;
  • next time, we're going to set up the web server in the cloud and use that to deliver fake starting prices, but we won't generate any updates;
  • the third installment will be to build a local publishing tool which updates stock prices and communicates with the cloud, but "hacking in" the websocket subscription;
  • and then we'll connect all of that to our Neptune database, rounding off the corners and getting everything working.

Webapps are Easy

It seems to me that the act of writing a webapp is relatively easy and not really worthy of discussion here, particularly when it is this easy. So I'm just going to present it and provide a handful of comments.

Let's start with the HTML:
<!DOCTYPE html>
<html>
    <head>
        <title>Stock Watcher</title>
        <meta content="width=device-width, initial-scale=1.0" name="viewport"/>
        <link rel="stylesheet" href="css/stocks.css" />
        <script type="module" src="js/stocks.js"></script>
        <template id="stockrow"><tr><td class='ticker'>fred<td class='price'>2020</td></tr></template>
    </head>
    <body>
        <div class="title">Stock Watcher</div>
        <table class="stock-table">
            <tbody>
                <tr><th class="ticker">Ticker</th><th class="price">Price</th></tr>
            </tbody>
        </table>
    </body>
</html>

NEPTUNE_WEBAPP_MOCK:neptune/app/index.html

Yes, it really is this simple. Basically it's a title and a table. The table is, of course, initially empty. Most of the ceremony relates to including JavaScript and CSS. The <meta> tag is required to get out of "stupid mode" on mobile devices.

The only mildly interesting thing here is the <template> line, which defines a template for each row we want to insert into the table (one for each stock we will be watching).

The CSS is largely equally dull:
div.title {
    text-align: center;
    font-family: Arial, Helvetica, sans-serif;
    font-size: 2rem;
    font-weight: bold;
    margin-bottom: 0.5rem;
}

th.ticker, th.price {
    font-size: 1.3rem;
}

table.stock-table {
    width: 100%;
    max-width: 600px;
    margin: auto;
}

.ticker {
    text-align: left;
    width: 50%;
}

.price {
    text-align: right;
    width: 50%;
    padding: 5px;
}

td.ticker {
    font-weight: bold;
}

td.price.green {
    background-color: green;
}

td.price.faded {
    background-color: transparent;
    transition: background-color 1.5s;
}

td.price {
    background-color: transparent;
}

NEPTUNE_WEBAPP_MOCK:neptune/app/css/stocks.css

Again, this is fairly vanilla CSS that I threw together (you could probably do better). The one slightly tricky part is my implementation of the "repeated yellow-fade technique", which involves the faded class and its associated transition. When prices are updated, we want to flash them green (if they've gone up) or red (if they've gone down). At the moment, our test data only has prices going up, so I've only got as far as green. Either way, we want the colour to transition back to transparent after a short while.

The JavaScript

The JavaScript is spread over three files (at the moment):
  • a main file (stocks.js) that sets everything up;
  • a mockquoter that is currently responsible for generating prices, but will eventually be replaced by a websocket listener;
  • a quotewatcher that is responsible for accepting the updates and updating the display.
The stocks.js file basically sets up to handle the load event and get everything running:
import { MockStockQuoter } from './mockstockquoter.js';
import { QuoteWatcher } from './quotewatcher.js';

window.addEventListener("load", () => {
    var table = document.querySelector(".stock-table tbody");
    var templ = document.getElementById("stockrow");
    var quoteWatcher = new QuoteWatcher(table, templ);
    new MockStockQuoter().provideQuotesTo(quoteWatcher);
});

NEPTUNE_WEBAPP_MOCK:neptune/app/js/stocks.js

It pulls the table and the template out of the DOM and passes them to a QuoteWatcher that it creates.

It then creates a MockStockQuoter and tells it to pass quotes to the watcher.

The mockstockquoter.js file is just there to provide some stream of data to check that we can get something working:
class MockStockQuoter {
    constructor() {
        this.eiqq = 2205;
    }

    provideQuotesTo(lsnr) {
        var me = this;
        lsnr.quotes([{ticker: "EIQQ", price: 2197}, {ticker: "MODD", price: 1087}, {ticker: "QUTI", price: 3030}]);


        setTimeout(() => me.nextQuote(lsnr), 1500);
    }

    nextQuote(lsnr) {
        lsnr.quotes([{ticker: "EIQQ", price: this.eiqq}]);
        this.eiqq += 10;
        if (this.eiqq < 2300) {
            setTimeout(() => this.nextQuote(lsnr), 3000);
        }
    }
}

export { MockStockQuoter }

NEPTUNE_WEBAPP_MOCK:neptune/app/js/mockstockquoter.js

The main thing here is provideQuotesTo. This immediately turns around and sends a list of quotes to the listener. It then kicks off a timer to send an update.

The constructor and nextQuote then collaborate to send an update to the EIQQ stock price every 3s until the price passes 2300. This gives us a repeatable test bed: every time we load the page we receive one initial message and ten updates.
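To sanity-check that count of ten, here is a throwaway sketch (not part of the app) that mirrors nextQuote's "send, increment, reschedule while below the limit" logic for this starting value:

```javascript
// Count how many updates MockStockQuoter produces: it sends a quote at the
// current price, adds the step, and only reschedules while still below the limit.
function countUpdates(start, step, limit) {
    let price = start;
    let updates = 0;
    while (price < limit) {
        updates += 1;   // a quote batch is sent at this price
        price += step;  // eiqq += 10 in the real code
    }
    return updates;
}

console.log(countUpdates(2205, 10, 2300)); // 10
```

So the prices sent are 2205, 2215, ..., 2295: exactly ten updates after the initial batch.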

Finally, what passes for heavy lifting here, the code to update the table when we receive updates is in quotewatcher.js:
class QuoteWatcher {
    constructor(table, templ) {
        this.table = table;
        this.templ = templ;
        this.curr = [];
    }

    quotes(lq) {
        for (var q of lq) {
            var matched = false;
            for (var r of this.curr) {
                if (q.ticker == r.ticker) {
                    this.updatePrice(r.elt, q.price);
                    fadeColor(r.elt.querySelector(".price"), "green");
                    matched = true;
                    break;
                }
            }
            if (!matched) {
                var node = this.templ.content.cloneNode(true).children[0];
                node.querySelector(".ticker").innerText = q.ticker;
                this.updatePrice(node, q.price);
                this.table.appendChild(node);

                this.curr.push({ticker: q.ticker, price: q.price, elt: node});
            }
        }
    }

    updatePrice(node, price) {
        var quote = new Intl.NumberFormat("en-US", { style: "currency", currency: "USD" }).format(price/100)
        node.querySelector(".price").innerText = quote;
    }
}

function fadeColor(elt, style) {
    elt.classList.add(style);
    elt.classList.remove("faded");
    setTimeout(() => {
        elt.classList.add("faded");
    }, 10);
}

export { QuoteWatcher } 

NEPTUNE_WEBAPP_MOCK:neptune/app/js/quotewatcher.js

The important method here is quotes: it is called every time there is new quote information, and its argument is a list of ticker/price pairs. If we already have the ticker, we update the price and make it flash; if we don't, we add a new row at the end (yes, we could sort alphabetically, but we can also not). Note that we copy the incoming information into our own list (this.curr) and also track the DOM element we created (elt) for easier updating.
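Stripped of the DOM work, the matching logic in quotes is just a keyed upsert. A DOM-free sketch of the same idea (the names here are hypothetical, not the app's code):

```javascript
// Upsert each incoming quote into a list keyed by ticker, as quotes() does:
// update the entry if the ticker is already present, otherwise append it.
function upsertQuotes(curr, incoming) {
    for (const q of incoming) {
        const existing = curr.find(r => r.ticker === q.ticker);
        if (existing) {
            existing.price = q.price; // matched: update in place (the app also updates the DOM row)
        } else {
            curr.push({ ticker: q.ticker, price: q.price }); // new: append (the app also clones the template)
        }
    }
    return curr;
}

const list = upsertQuotes([], [{ ticker: "EIQQ", price: 2197 }, { ticker: "MODD", price: 1087 }]);
upsertQuotes(list, [{ ticker: "EIQQ", price: 2205 }]);
console.log(list.map(r => `${r.ticker}=${r.price}`).join(", ")); // "EIQQ=2205, MODD=1087"
```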

The updatePrice method is really just about formatting the price as dollars, given that the data format is an (integer) number of cents.
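Run in isolation, the formatter behaves like this (Intl.NumberFormat is a standard API, available in both browsers and Node):

```javascript
// Format an integer number of cents as US dollars, as updatePrice does.
const usd = new Intl.NumberFormat("en-US", { style: "currency", currency: "USD" });

console.log(usd.format(2197 / 100)); // "$21.97"
console.log(usd.format(1087 / 100)); // "$10.87"
```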

The fadeColor function is the implementation of the "repeated yellow fade technique". You can't just add the faded class, because it may still be on the element from "last time". So we need to remove it first. But removing and re-adding a class in the same tick doesn't help either: the browser batches the two changes before the next render, so no transition is triggered. So we remove it now, then come back in "a little while" and add it back on.

Conclusion

Yes, that really is it for the mock application. It really isn't that complicated.