The biggest obstacle remaining is that the nodes are all operating independently, and none of them are checking each other's work, let alone certifying it in a non-repudiable fashion.
The next three posts are going to tackle that, as we divide the work up:
- In this post, we are going to improve configurability, so that nodes are at least aware of each other's existence;
- In the next post, we will handle the communication by making sure that each node sends and receives the transactions and blocks it is processing;
- In the third post, we will check and store all the incoming messages.
The Current State
As it currently stands, node configuration is a mess. This is mainly because I haven't thought about it very much. Or rather, because the products of my thoughts have not yet made it into the code. So let's start by reviewing my thoughts. There are three different ways in which I visualize a node starting:
- As a chainledger node from the command line;
- As a node within a harness test;
- As a lambda function (or equivalent) launched in an AWS environment.
Clearly, however, from the perspective of a node starting up, it wants an interface that can hide all of this.
In order to run properly, each node needs to know at least:
- Its own name, which is the base URL other nodes will use to communicate with it;
- A port to listen on;
- A private signing key;
- For each of the other nodes, a URL to communicate with it;
- For each of the other nodes, a public key to check the signatures on the records it sends.
- In config/config.go, we have a NodeConfig, but it's a struct, not an interface:
type NodeConfig struct {
Name *url.URL
ListenOn string
NodeKey *rsa.PrivateKey
}
STORE_BLOCK:internal/config/config.go
- In chainledger/main.go, we just hardcode in a couple of values and call ReadNodeConfig (inside Start()) to generate a random key:
func main() {
url, _ := url.Parse("https://localhost:5001")
node := clienthandler.NewListenerNode(url, ":5001")
node.Start()
}
STORE_BLOCK:cmd/chainledger/main.go
- In the harness, we read the same couple of values for each node from the configuration file, and likewise call ReadNodeConfig inside Start.
func (nc *NodeConfig) UnmarshalJSON(bs []byte) error {
var wire struct {
Name string
ListenOn string
}
if err := json.Unmarshal(bs, &wire); err != nil {
return err
}
if url, err := url.Parse(wire.Name); err == nil {
nc.Name = url
} else {
return err
}
nc.ListenOn = wire.ListenOn
return nil
}
STORE_BLOCK:internal/config/config.go
- And ReadNodeConfig doesn't do any reading, but generates a new key pair:
func ReadNodeConfig(name *url.URL, addr string) (*NodeConfig, error) {
pk, err := rsa.GenerateKey(rand.Reader, 2048)
if err != nil {
return nil, err
}
return &NodeConfig{Name: name, ListenOn: addr, NodeKey: pk}, nil
}
STORE_BLOCK:internal/config/config.go
- The common code to run a node (ListenerNode) is created in NewListenerNode:
func NewListenerNode(name *url.URL, addr string) Node {
return &ListenerNode{name: name, addr: addr, Control: make(types.PingBack)}
}
STORE_BLOCK:internal/clienthandler/node.go
- And then Start looks like this:
func (node *ListenerNode) Start() {
log.Printf("starting chainledger node %s\n", node.name)
clock := &helpers.ClockLive{}
hasher := &helpers.SHA512Factory{}
signer := &helpers.RSASigner{}
config, err := config.ReadNodeConfig(node.name, node.addr)
if err != nil {
fmt.Printf("error reading config: %s\n", err)
return
}
pending := storage.NewMemoryPendingStorage()
resolver := NewResolver(clock, hasher, signer, config.NodeKey, pending)
node.journaller = storage.NewJournaller(node.name.String())
node.runBlockBuilder(clock, node.journaller, config)
node.startAPIListener(resolver, node.journaller)
}
STORE_BLOCK:internal/clienthandler/node.go
So let's rip it up and start again.
The Interfaces
Naming things is one of the two irreducibly hard problems of computer science. I'm not very happy with the names I've come up with in this section, so don't be surprised if I change them at some point. We have two different "sorts" of nodes: ones that we want to launch, and ones that we want to talk to. Obviously, each node fills both roles "in the real world", but within the code of a single node, the node that is running is qualitatively different from the others. I have distinguished these as just a plain NodeConfig and a LaunchableNodeConfig if it has the additional properties needed to be launched. When we try to initialize a ListenerNode, we will pass it a LaunchableNodeConfig which, in turn, contains a list of the OtherNodes we want to talk to, each of which is a NodeConfig. Clear?
type NodeConfig interface {
Name() *url.URL
PublicKey() *rsa.PublicKey
}
type LaunchableNodeConfig interface {
NodeConfig
ListenOn() string
PrivateKey() *rsa.PrivateKey
OtherNodes() []NodeConfig
}
REDO_CONFIG:internal/config/config.go
Here the use of NodeConfig in LaunchableNodeConfig is approximately equivalent to saying that the interface LaunchableNodeConfig extends the interface NodeConfig. Or, in other words, all the methods that are declared in NodeConfig are also to be declared in LaunchableNodeConfig. The rest of the code that was in config.go has been thrown away. We are going to do things differently.
Using the Configuration
Now that we have an interface, we can rework the code in Start. Firstly, here is the new code for NewListenerNode:
func NewListenerNode(config config.LaunchableNodeConfig) Node {
return &ListenerNode{config: config, Control: make(types.PingBack)}
}
REDO_CONFIG:internal/clienthandler/node.go
Instead of taking a name and an address, this now takes a LaunchableNodeConfig, which has all the information we need to start a node. It stores this in the ListenerNode struct:
type ListenerNode struct {
config config.LaunchableNodeConfig
Control types.PingBack
server *http.Server
journaller storage.Journaller
}
REDO_CONFIG:internal/clienthandler/node.go
And inside Start we now have everything we need to pass around:
func (node *ListenerNode) Start() {
log.Printf("starting chainledger node %s\n", node.Name())
clock := &helpers.ClockLive{}
hasher := &helpers.SHA512Factory{}
signer := &helpers.RSASigner{}
pending := storage.NewMemoryPendingStorage()
resolver := NewResolver(clock, hasher, signer, node.config.PrivateKey(), pending)
node.journaller = storage.NewJournaller(node.Name())
node.runBlockBuilder(clock, node.journaller, node.config)
node.startAPIListener(resolver, node.journaller)
}
REDO_CONFIG:internal/clienthandler/node.go
So we need to update the chainledger and harness commands to provide this.
The chainledger command
The chainledger command is the simpler of the two, but it is also the more dramatic, because it is such a big change. It now takes an argument and reads its configuration from that file:
func main() {
if len(os.Args) < 2 {
fmt.Println("Usage: chainledger <config>")
return
}
config := config.ReadNodeConfig(os.Args[1])
node := clienthandler.NewListenerNode(config)
node.Start()
}
REDO_CONFIG:cmd/chainledger/main.go
Although ReadNodeConfig has the same name as a function we used to have, it is a completely new function in a completely new file in the config directory:
package config
import (
"crypto/rsa"
"crypto/x509"
"encoding/base64"
"encoding/json"
"io"
"net/url"
"os"
)
type NodeJsonConfig struct {
Name string
ListenOn string
PrivateKey string
PublicKey string
OtherNodes []NodeJsonConfig
}
type NodeConfigWrapper struct {
config NodeJsonConfig
url *url.URL
private *rsa.PrivateKey
public *rsa.PublicKey
others []NodeConfig
}
// ListenOn implements LaunchableNodeConfig.
func (n *NodeConfigWrapper) ListenOn() string {
return n.config.ListenOn
}
// Name implements LaunchableNodeConfig.
func (n *NodeConfigWrapper) Name() *url.URL {
return n.url
}
// OtherNodes implements LaunchableNodeConfig.
func (n *NodeConfigWrapper) OtherNodes() []NodeConfig {
return n.others
}
// PrivateKey implements LaunchableNodeConfig.
func (n *NodeConfigWrapper) PrivateKey() *rsa.PrivateKey {
return n.private
}
// PublicKey implements LaunchableNodeConfig.
func (n *NodeConfigWrapper) PublicKey() *rsa.PublicKey {
return n.public
}
func ReadNodeConfig(file string) LaunchableNodeConfig {
fd, err := os.Open(file)
if err != nil {
panic(err)
}
defer fd.Close()
bytes, _ := io.ReadAll(fd)
var config NodeJsonConfig
json.Unmarshal(bytes, &config)
url, err := url.Parse(config.Name)
if err != nil {
panic("cannot parse url " + config.Name)
}
pkbs, err := base64.StdEncoding.DecodeString(config.PrivateKey)
if err != nil {
panic("cannot parse base64 private key " + config.PrivateKey)
}
pk, err := x509.ParsePKCS1PrivateKey(pkbs)
if err != nil {
panic("cannot parse private key after conversion from " + config.PrivateKey)
}
others := make([]NodeConfig, len(config.OtherNodes))
for i, json := range config.OtherNodes {
bs, err := base64.StdEncoding.DecodeString(json.PublicKey)
if err != nil {
panic("cannot parse base64 public key " + json.PublicKey)
}
pub, err := x509.ParsePKCS1PublicKey(bs)
if err != nil {
panic("cannot parse public key after conversion from " + json.PublicKey)
}
others[i] = &NodeConfigWrapper{config: json, public: pub}
}
return &NodeConfigWrapper{config: config, url: url, private: pk, public: &pk.PublicKey, others: others}
}
REDO_CONFIG:internal/config/node_json_config.go
There's a lot going on here, so let's take it slowly. Because we want to parse a JSON file using the standard json.Unmarshal code provided by Go, we declare a type NodeJsonConfig which exactly matches the JSON input (in terms of strings and nested NodeJsonConfig objects for the remote nodes). We then separately define a NodeConfigWrapper which stores both the underlying configuration and the "parsed" versions of url, private key and public key.
In ReadNodeConfig, the first few lines open the file (checking for errors and deferring the closing of it):
fd, err := os.Open(file)
if err != nil {
panic(err)
}
defer fd.Close()
The next few read the file and then parse out the JSON into a NodeJsonConfig struct:
bytes, _ := io.ReadAll(fd)
var config NodeJsonConfig
json.Unmarshal(bytes, &config)
And then the rest of the code turns the various string values (including the nested string values inside the OtherNodes) into the appropriate internal objects. The final line builds up a NodeConfigWrapper object, which can then handle all the config requests easily:
return &NodeConfigWrapper{config: config, url: url, private: pk, public: &pk.PublicKey, others: others}
We also need to provide an actual JSON configuration file (or two). Here's the first one, for node "5001":
{
"name": "http://localhost:5001",
"listenOn": ":5001",
"privateKey": "MIIEowIBAAKCAQEArcZdpZM6fipqtMMmts3xkD7s6PQWNnF0KCYEESRIebSFX0fKVC8urvF6wkf4EMFDT36bDtzg3Lh/fxaCYadxaxxs36M1MpYRoBi9CX/VyIIiwpej7Zccm2cfGSAghy48ArAX2SPZS0EGEjTNBuVSh+gkFsy3rQkmQs8/XFR5C9iPhpzUCkqhue6k9euyfN14YoOdEB1xlfp42YEXISuhWoMNyN8Qb4qk39JxxsYE7YBxUbIN6gB7Hi8eAoI6bbcITUifGP4Ax0t/O9YnO/kL6h+hEECK4izQU8kKvVE4jNoBScBwfQChD48vFgNdcDAs+4cwyJkMebV8FXplRIKMDwIDAQABAoIBAQCWE/NcxEKII+n0I3aT+ljd0vqYVfW5H1LKOcrZYxSUx6tIFqBPBFC1FiiHEdDT55VSWm1f8LLi7RRvlekUnZ/+eZYtrq6K+cBPHA5m3disSnfqxzv0PcWfEPhyoqR1GyEI0TxHdAZ+T7IGl0Na6ULVzU8dwb//2R8KJCL8gpfn+bui99835zfTCJj0NTTgfb78VpiSASZ2UJK03YZ0w3n+RiuehBErPVe8mxpdQrucgavIvZvCgnifF25FmWHoH3thEM7Mcvauug3qhZeAEeU+4OhNd8kYX5N5AmtvjDWbGzgJMbZZ7/VxIOR+cv9PPwxXK31vGLC3O0pFMXCW0AhBAoGBANtKtHF7vyV1QCibTHf+kRCDty6cNua4HlMy7ICyy7Lnoz/e0Y6FR8JIxTtyr6+wsd0IsefC+X/UTw+8HE2kSgTyqEls/EW07ot1b8AtpY4OR3rbQ1tYGYtM1nacK2goh0E0Hzj+snx6OHxeOe3GK6S3ydqDxvHzlQaGsFUr4DrdAoGBAMrdIGx44LxNpzshD90TzD7K1gi8zZLUWRW9zzC1UoNM/TaSmRsVgfCe2r2U58v1t9rCm5w4L7XYS1D5nigfEQSndi5jhdLM9Zco/B8rYBiBLRzFN458GvWZ5jhJRu9eRt4rXkdl0QW2tSYAl0Jsnw6M8Kkwtgafi7y6oJH1Y2XbAoGAPnU2k6P1O0v77BTfYMXmt1dskx/3GxuRt4yng7hpABmti4GBGiCn4ZQsaNQvadDft97EHQiRW3Ey235uaUbDtkkO2WrrJ0dzMdFO9OOLZbx3a2yL8LZVADHwW3P7gP0aGN4pjmgsmfuNnw6PXUO2JoIaQdyKi1sfNO6jxn5qrRkCgYAoZe6+Czhd52zlFoltMjMbUhNbfBXIJqdy7/Cht4ouAZfvVTROM3ND8q6G0G90q4Moela4vmup39/nyT3YqY8fCSY8yK7ussg5iPzkTCP/3UGZmCCfLFHGFRbGoLkSlAiy15oXx8vfQmpCnh2BKdZm9GQ8nSmymfUe6V9ukZpwvwKBgBuEiNLS2TE8E4V6bhP9mMjDFP3hhIm9BngnJECCW7RdPIiNpwqd4LngUeM5xbRcOoxPBMzXtfJSazFDJcO0mXZgqTDr4cwSon3fVhboyim4JFHWIo5fgnRoH5m6Ty11SzaT73pLsa8g83VECBgR/oWSQqn2EoqYs4xjbCtBBjqX",
"otherNodes": [
{"name": "http://localhost:5002", "publicKey": "MIIBCgKCAQEArRX2JO4Shwb1dsw6/3vIV7aTDWWjEHvI8sYsV3qcRt6pQGMlmLu8+h5Wn76iuM5+TIfTJu8Ct3x/xeD0DrGWgjjTsb8ehMnkzviU+qKOWkeDzqmxRWZNlfayZRxl4gAC8JShQA8mGTs2im8EcJTFP6FsX7aBBpIXiM0C7JHKnmmYGhHJixHl4fPxdnfeunqgJWNuQNZ0sYgcdQcwgZoAAcZUVbLUOLKvkT4odovQLo7knVlfa+2rDt6hJ00v5Q17OCedNyYD16Rp7JBGeV8d9M7ZD7+/gFKzRfSfFONiNO0wXJo4LtgVFMZ3Jr3Z493uOb/po4IR+Ui+ij8YdECsXQIDAQAB"}
]
}
REDO_CONFIG:config/nodes/node5001.json
Where did those private and public keys come from? Well, I created a new command called keypair:
package main
import (
"crypto/rand"
"crypto/rsa"
"crypto/x509"
"encoding/base64"
"fmt"
)
func main() {
pk, err := rsa.GenerateKey(rand.Reader, 2048)
if err != nil {
panic("could not generate key")
}
fmt.Printf("private key: %s\n", base64.StdEncoding.EncodeToString(x509.MarshalPKCS1PrivateKey(pk)))
fmt.Printf("public key: %s\n", base64.StdEncoding.EncodeToString(x509.MarshalPKCS1PublicKey(&pk.PublicKey)))
}
REDO_CONFIG:cmd/keypair/main.go
This command generates a new keypair every time it is run, so that you can initialize all the different nodes. This is a tedious business, but in reality you have to do something like this:
private key: MIIEpAIBAAKCAQEAwODwdGwL7jGUG7yuuTvRjYz3fQZUrulj5ULzfPKVR01IFQv7GYdWuaRVJtKd6dTDSYRge0tcUp4X9yFiIMBRWVZAOInRHUE8ISN2eGLuznQdRK73U8Dr31l91rrVcp3WGw594Bksmvb47oDZM6Zmywiv5niH0IQ3CHu8BD7fXwo4GTQ4kC7UAVks2HikKUpInsW+tR+yPgajfNsOPM/iiRVG/wTNRnRoRLi0u7+WH14fBTQdA6uvAIxik+dNtQT291hPWL7QcSK5WnNTjqfHwYAmtHTplN2XkLRc0G7YD7l4ZD4EqAFFXK8vmkdJSfIHxmHZmrd1k0WOxDOcsKywOQIDAQABAoIBAFd8L9S+xVKPDlzeYmoGZfBMhl0hJ/wGRJdSnNqJtYgX16AkRQq5Rm8ByNXJJnNPXBzWfGSwM/oNV1VywO2WDc/1vT9n03/vfPSS/0NvrF3ccQIcUnacxOAT2W4yZGqOiPTQx+uDv6WybArSSrKQwYNKN27UMNY1gjjI1ukeE3cpg50V/FrAXZEUUcvh9VS8ixbF9VwCV2b0KwghZjyWQxxdoeCyFssRahfNUyzWSs2ZCN0zFmpQLdNq6/cx94ofnEUZDHuOoSVLCMZMvPcYeg/YlueIU3Z9giHRPw3t7eMJhw1C+7HhwsqfOUlmVWwvkgfYN8fXlsyXgLU627bRmLkCgYEA0zBfjWw6DvW3sg0yka0l3Qiz0TXwtypbhVFydr8JQfKfVIjop0adwzWf9a2vAhJWbmMt+edtOx+FV0AUViP0sT+yhTnq092nS+gGEQuXTqbODW6XKXwnSHS55qo+/FQ8XDiSsuqBkx25BFQvVDWX5JYCjA/m8tv14yQuUOaWgFsCgYEA6c32+8batQPCqaijbtV779IRwRxjrIqjXf/CYJBs118uvbf98SVvxe3XmMgoP0aCArzTLLn53ZYHIrvbhWNbwvs0t7y6We5Ep5nSfdDIo5w7TGjsc3osnXv0qrB8rcmm4J7YfVOgsQ3fhTHgOw5Zf5Dx6qirSLirV6tJ9b2+NfsCgYEAuSyZG+/hmGxrfXuE86bWpFCVGsQpJPHG/cbEjspC28hZXE4PcVzByAClGU4JPc/GaVQdZBo/9K9Ww4I0UrOEQkaPybFW7h5UKoJvj1KSgSxRUAXAFWf/KdDvkAmG4Mkbg+E3ABoPM2fEar9GIJg9bvj5ksX+wsOLfnajBdyp6jECgYAF+qpyTeeR8YKs7A8h6nu86lZh5eP2qaT75mqGJati5qA/YdEwtZBiM27sDVJaK+dvQnz0C92D+S49iShYBO530gzLFhx96EYBM0HazdgTtw8dKSHC4kD51g2vv8uwdhO6ctV+fwEBBiXNNjVRzVAknwRQx/d5aJ+ZIlxF2JBguQKBgQCsWwfln+HK4jWftaJTUbzIQ8dQmogIYTuF3aSc9OeB5222VNO8IeeYttvxI4U1xv3g+JSOIFnTJ+QdX+4LitdiKlqPy8lYV9DcpLLgZQhOTLgilXtcPVhTddSwhcEni9N1UZLu+HO1DF39UN8OYkh0fAmy/CEWCoUaImOwbYZVIg==
public key: MIIBCgKCAQEAwODwdGwL7jGUG7yuuTvRjYz3fQZUrulj5ULzfPKVR01IFQv7GYdWuaRVJtKd6dTDSYRge0tcUp4X9yFiIMBRWVZAOInRHUE8ISN2eGLuznQdRK73U8Dr31l91rrVcp3WGw594Bksmvb47oDZM6Zmywiv5niH0IQ3CHu8BD7fXwo4GTQ4kC7UAVks2HikKUpInsW+tR+yPgajfNsOPM/iiRVG/wTNRnRoRLi0u7+WH14fBTQdA6uvAIxik+dNtQT291hPWL7QcSK5WnNTjqfHwYAmtHTplN2XkLRc0G7YD7l4ZD4EqAFFXK8vmkdJSfIHxmHZmrd1k0WOxDOcsKywOQIDAQAB
I then just did the copy-and-paste from the output here into the files. Obviously, if you are doing this seriously, you need to take a lot more care about how you store (and where you show) your private keys.
The harness
The harness is more complicated by far, for two reasons. Firstly, it has to handle multiple nodes. Secondly, I want it to infer from a single data source all the configurations for all the nodes and all the clients. I don't want to have to explain to each node what all the other nodes are (although I may want the ability to fudge something later so that I can test what happens when one or more nodes are misconfigured). And I don't want to go through the tedious business of hand-generating public and private keys when the harness can do all of that itself. In reworking this, I also decided that my previous layout with nodes and clients separate was not the best fit for this brave new world, so I reworked the configuration like so:
{
"nodes": [{
"name": "http://localhost:5001",
"listenOn": ":5001",
"clients": [
{ "user": "https://user1.com/", "count": 10 },
{ "user": "https://user2.com/", "count": 2 }
]
},
{
"name": "http://localhost:5002",
"listenOn": ":5002",
"clients": [
{ "user": "https://user1.com/", "count": 5 },
{ "user": "https://user2.com/", "count": 6 }
]
}]
}
REDO_CONFIG:config/harness/node_2.json
Now the clients associated with each node are embedded within the definition of the node. I commented at the time that I wanted them separate so that I could run clients without running nodes. If I still want to do that, I will have to figure out how to make that happen. The obvious thing would be to remove the listenOn field, since that would stop the nodes being launched. We could also add an extra launch field on each node, which, when set to true would launch the node but, when set to false, would only launch the clients. To read this configuration, I have used much the same technique as for the single node, of unpacking the JSON into a "holding" configuration, and then building out the actual configurations on top of that:
type HarnessConfig struct {
Nodes []*HarnessNode
keys map[string]*rsa.PrivateKey
}
type HarnessNode struct {
Name string
ListenOn string
Clients []*CliConfig
url *url.URL
}
type CliConfig struct {
User string
Count int
}
REDO_CONFIG:internal/harness/config.go
Here HarnessConfig holds the overall configuration - a list of nodes (we'll come back to the map keys). Each node is represented by a HarnessNode (we'll come back to the url), and each of the clients is read into a CliConfig struct. This is read from the configuration file by ReadConfig as before, although the contents of the function have changed dramatically:
func ReadConfig(file string) Config {
fd, err := os.Open(file)
if err != nil {
panic(err)
}
defer fd.Close()
bytes, _ := io.ReadAll(fd)
var ret HarnessConfig
json.Unmarshal(bytes, &ret)
ret.keys = make(map[string]*rsa.PrivateKey)
for _, n := range ret.Nodes {
name := n.Name
url, err := url.Parse(name)
if err != nil {
panic("could not parse name " + name)
}
n.url = url
pk, err := rsa.GenerateKey(rand.Reader, 2048)
if err != nil {
panic("key generation failed")
}
ret.keys[name] = pk
}
return &ret
}
REDO_CONFIG:internal/harness/config.go
The first two blocks operate in much the same way as the node configuration above, but note that we slip in an initialization of the keys map. The range loop iterates over all the nodes, parsing the node name and storing it in url, and then generating a private key and storing it in the keys map. (We could have chosen different strategies for storing these; this is just what came naturally to me at the time.)
This is basically a direct representation of the data that we read from the harness configuration file. We want to use this information to generate information about:
- each node we want to launch;
- each client we want to launch;
- all the remote nodes as viewed from any given node.
type Config interface {
NodeNames() []string
Launcher(forNode string) config.LaunchableNodeConfig
Remote(forNode string) config.NodeConfig
ClientsFor(forNode string) []*CliConfig
}
REDO_CONFIG:internal/harness/config.go
which represents all the operations we will want to carry out from within the harness driver. HarnessConfig then implements these various methods given the data it has loaded and generated.
It can return the list of node names thus:
func (c *HarnessConfig) NodeNames() []string {
ret := make([]string, len(c.Nodes))
for i, n := range c.Nodes {
ret[i] = n.Name
}
return ret
}
REDO_CONFIG:internal/harness/config.go
This allocates a slice of strings and then goes through the list of nodes, storing the name of each node in the slice. Finally it returns them. Remote and ClientsFor similarly scan the list of nodes, returning the appropriate entries:
func (c *HarnessConfig) Remote(forNode string) config.NodeConfig {
for _, n := range c.Nodes {
if n.Name == forNode {
return &HarnessRemote{from: n, public: &c.keys[forNode].PublicKey}
}
}
panic("no node found for " + forNode)
}
// ClientsPerNode implements Config.
func (c *HarnessConfig) ClientsFor(forNode string) []*CliConfig {
for _, n := range c.Nodes {
if n.Name == forNode {
return n.Clients
}
}
panic("no node found for " + forNode)
}
REDO_CONFIG:internal/harness/config.go
(As an aside, I notice when reviewing this that while I have refactored this to change the function names, the comments have not been updated. These comments seem cool, but if they are not going to be kept up to date, they really aren't.) Finally, Launcher does much the same thing, but it returns a HarnessLauncher, which is quite a complex beast in its own right.
func (c *HarnessConfig) Launcher(forNode string) config.LaunchableNodeConfig {
for _, n := range c.Nodes {
if n.Name == forNode {
return &HarnessLauncher{config: c, launching: n, private: c.keys[n.Name], public: &c.keys[n.Name].PublicKey}
}
}
panic("no node found for " + forNode)
}
REDO_CONFIG:internal/harness/config.go
So much so that I put it in its own file, which I'll reproduce here with almost no commentary. It's much like HarnessConfig but on a smaller scale.
package harness
import (
"crypto/rsa"
"net/url"
"github.com/gmmapowell/ChainLedger/internal/config"
)
type HarnessLauncher struct {
config *HarnessConfig
launching *HarnessNode
private *rsa.PrivateKey
public *rsa.PublicKey
}
// Name implements config.LaunchableNodeConfig.
func (h *HarnessLauncher) Name() *url.URL {
return h.launching.url
}
// PublicKey implements config.LaunchableNodeConfig.
func (h *HarnessLauncher) PublicKey() *rsa.PublicKey {
return &h.config.keys[h.launching.Name].PublicKey
}
// ListenOn implements config.LaunchableNodeConfig.
func (h *HarnessLauncher) ListenOn() string {
return h.launching.ListenOn
}
// OtherNodes implements config.LaunchableNodeConfig.
func (h *HarnessLauncher) OtherNodes() []config.NodeConfig {
ret := make([]config.NodeConfig, len(h.config.NodeNames())-1)
j := 0
for _, n := range h.config.NodeNames() {
if n == h.launching.Name {
continue
}
ret[j] = h.config.Remote(n)
j++
}
return ret
}
// PrivateKey implements config.LaunchableNodeConfig.
func (h *HarnessLauncher) PrivateKey() *rsa.PrivateKey {
return h.private
}
REDO_CONFIG:internal/harness/harness_launcher.go
The one thing that feels worth pointing out is the OtherNodes method (which is not yet used, but will be in the next episode). This has to return a list of all the nodes except the current node under consideration. When the HarnessLauncher is created, it is a configuration created from the list of all nodes for a specific node. So by going back to the original configuration - and specifically the list of all the nodes - we can find a list of all the nodes except this one, and then ask the configuration to return us the Remote configuration for all of those. You'll notice that we again allocate a slice with the correct number of elements in it so that we don't need to use append or reallocate the slice at any point.
Conclusion
This feels as if we have reached an abrupt end, but that's possibly because we did things backwards and wrote all the code to launch the nodes first, and then came back and did all the work with the configuration. In doing so, I tried to change everything as little as possible, so most of the harness code didn't even change. All we were really trying to do was to set up the node configuration so that each node could ask for the URLs and public keys of the other nodes in the system.