Wednesday, March 5, 2025

Signing the Weaves

At first glance, it might seem that signing off on the weaves is simple, and I should have just done it in the last episode.

There's some truth to that, but most of the complexity is in looking at everyone else's signatures and asserting that we have all signed off on the same thing. And in order to do that, I'm going to use a third kind of journal.

One option would be to continually update the Weave objects with each signature that comes in, but as I've said before, I don't like any solution that involves re-writing history.

The journalling technique we have been using - recording everything in a chain - doesn't really apply here, because logically if not physically the signature is associated with the Weave.

So what we need is something akin to a pair of tables linked by an id, implemented in a journal where we only ever add new entries. That is, for any given Weave, we want to be able to add one signature for each node in the system. We will only accept and store (the three checks are sketched in code after the note below):
  • signatures from a node we recognize which are valid for that node and the specified weave id;
  • signatures where we have calculated the same Weave.ID locally;
  • signatures where we do not already have a signature for that node and weave.
Note that there is a race condition here in that the various nodes will calculate the weaves at different times. When we start actively considering the properties of distributed systems, we will come back and address that. However, we know that, for now, if we wait half a second or so, we will have calculated the weave. So we will simply do that: wait half a second before processing a remotely signed Weave.
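To make those rules concrete, here is a sketch of the three checks in one place, using illustrative types (string IDs, integer timestamps, a pluggable verify function) rather than the real ones; in the actual code the checks end up spread across the weave handler, the RemoteStorer and the journals described below.
package sketch

import (
    "crypto/rsa"
    "fmt"
)

type weaveKey struct {
    node    string // which node signed
    weaveID string // the weave's hashed ID, rendered as a string for map keys
}

type signatureStore struct {
    nodeKeys    map[string]*rsa.PublicKey // the nodes we recognize, and their keys
    localWeaves map[int64]string          // timestamp -> the weave ID we calculated locally
    signatures  map[weaveKey][]byte       // at most one signature per node per weave
    verify      func(pub *rsa.PublicKey, weaveID string, sig []byte) error
}

func (s *signatureStore) accept(node string, when int64, weaveID string, sig []byte) error {
    // rule 1: the node must be one we recognize, and the signature must be
    // valid for that node and the specified weave id
    pub := s.nodeKeys[node]
    if pub == nil {
        return fmt.Errorf("signature from unrecognized node %s", node)
    }
    if err := s.verify(pub, weaveID, sig); err != nil {
        return err
    }
    // rule 2: we must have calculated the same weave ID locally
    if s.localWeaves[when] != weaveID {
        return fmt.Errorf("weave %s does not match the weave we calculated at %d", weaveID, when)
    }
    // rule 3: we must not already have a signature for this node and weave
    k := weaveKey{node: node, weaveID: weaveID}
    if s.signatures[k] != nil {
        return fmt.Errorf("already have a signature from %s for weave %s", node, weaveID)
    }
    s.signatures[k] = sig
    return nil
}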

Signing and Publishing

Part of generating every Weave is to sign it (that is, sign the hashed ID of the Weave) and then tell all the other nodes that we have generated and signed it.

It's not hard to sign the Weave; once we have done that, we can copy code from the block builder again, and ask the Weave to marshal itself (along with our node name and signature) and send the resulting blob to a list of BinarySenders:
func (t *IntervalLoomThread) Run() {
    delay := time.Duration(t.interval/3) * time.Millisecond
    timer := t.clock.After(delay)
    var prev *records.Weave

    for {
        select {
        case <-t.control:
            log.Printf("%s weaver asked to quit\n", t.loom.Name())
            return
        case weaveBefore := <-timer:
            weaveBefore = weaveBefore.RoundTime(t.interval)
            if !t.myjournal.HasWeaveAt(weaveBefore) {
                weave := t.loom.WeaveAt(weaveBefore, prev)
                if weave != nil {
                    t.myjournal.StoreWeave(weave)
                    signature, err := t.signer.Sign(t.pk, weave.ID)
                    if err != nil {
                        log.Printf("%s failed to sign weave %v\n", t.loom.Name(), weave.ID)
                    } else {
                        log.Printf("%s wove at %v: %s\n", t.loom.Name(), weaveBefore, weave.ID.String())
                        weave.MarshalAndSend(t.senders, t.loom.Name(), signature)
                    }
                    // weave.LogMe(t.loom.Name())
                    prev = weave
                } else {
                    log.Printf("%s could not weave at %v\n", t.loom.Name(), weaveBefore)
                }
            }
        }
        timer = t.clock.After(delay)
    }
}

WEAVE_SIGN_PUBLISH:internal/loom/loom_thread.go

The marshalling and sending happens in Weave:
func (w *Weave) MarshalAndSend(senders []helpers.BinarySender, node string, sig types.Signature) {
    blob, err := w.MarshalBinary(node, sig)
    if err != nil {
        log.Printf("Error marshalling weave: %v %v\n", w.ID, err)
        return
    }
    for _, bs := range senders {
        go bs.Send("/remoteweave", blob)
    }
}

WEAVE_SIGN_PUBLISH:internal/records/weave.go

As with Block, the marshalling is delegated to MarshalBinary:
func (w *Weave) MarshalBinary(node string, sig types.Signature) ([]byte, error) {
    ret := types.NewBinaryMarshallingBuffer()

    // Marshal in the things that "belong to" the weave
    w.ID.MarshalBinaryInto(ret)
    w.ConsistentAt.MarshalBinaryInto(ret)
    w.PrevID.MarshalBinaryInto(ret)
    types.MarshalInt32Into(ret, int32(len(w.LatestBlocks)))
    for _, nb := range w.LatestBlocks {
        nb.MarshalBinaryInto(ret)
    }

    // and now marshal in the name and signature
    types.MarshalStringInto(ret, node)
    sig.MarshalBinaryInto(ret)

    return ret.Bytes(), nil
}

WEAVE_SIGN_PUBLISH:internal/records/weave.go

which in turn delegates some of its work to NodeBlock:
func (n *NodeBlock) MarshalBinaryInto(into *types.BinaryMarshallingBuffer) error {
    types.MarshalStringInto(into, n.NodeName)
    n.LatestBlockID.MarshalBinaryInto(into)
    return nil
}

WEAVE_SIGN_PUBLISH:internal/records/nodeblock.go

With that in place, we can run it again and, as we have come to expect, we see 404 return codes from the remote nodes:
2025/03/05 12:08:38 sending blob(408) to http://localhost:5001/remoteweave
2025/03/05 12:08:38 sending blob(408) to http://localhost:5002/remoteweave
2025/03/05 12:08:38 bad status code sending to http://localhost:5001/remoteweave: 404
2025/03/05 12:08:38 bad status code sending to http://localhost:5002/remoteweave: 404
As always, a lot of plumbing (and some refactoring) went on off-camera.
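The sender side of that plumbing never appears in this episode, so, purely for orientation, here is a sketch of what an HTTP-based BinarySender might look like, inferred from the way it is used above and from the log output; the type name and fields are illustrative, not the real code.
package sketch

import (
    "bytes"
    "log"
    "net/http"
)

// HttpBinarySender is a hypothetical implementation of the BinarySender
// interface used above: Send(path, blob) posts the blob to the remote node
// and logs the outcome, consistent with the log lines shown.
type HttpBinarySender struct {
    baseURL string // e.g. "http://localhost:5001"
}

func (s *HttpBinarySender) Send(path string, blob []byte) {
    url := s.baseURL + path
    log.Printf("sending blob(%d) to %s\n", len(blob), url)
    resp, err := http.Post(url, "application/octet-stream", bytes.NewReader(blob))
    if err != nil {
        log.Printf("error sending to %s: %v\n", url, err)
        return
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        log.Printf("bad status code sending to %s: %d\n", url, resp.StatusCode)
    }
}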

Listening for Other Nodes' Weaves

So, let's fix that 404. It's really no different to listening for messages or blocks. We add a handler to the client API in the node:
func (node *ListenerNode) startAPIListener(resolver Resolver, journaller storage.Journaller, senders []helpers.BinarySender) {
    cliapi := http.NewServeMux()
    pingMe := PingHandler{}
    cliapi.Handle("/ping", pingMe)
    storeRecord := NewRecordStorage(resolver, journaller, senders)
    cliapi.Handle("/store", storeRecord)
    remoteTxHandler := internode.NewTransactionHandler(node.config)
    cliapi.Handle("/remotetx", remoteTxHandler)
    remoteBlockHandler := internode.NewBlockHandler(node.config)
    cliapi.Handle("/remoteblock", remoteBlockHandler)
    remoteWeaveHandler := internode.NewWeaveHandler(node.config)
    cliapi.Handle("/remoteweave", remoteWeaveHandler)
    node.server = &http.Server{Addr: node.config.ListenOn(), Handler: cliapi}
    err := node.server.ListenAndServe()
    if err != nil && !errors.Is(err, http.ErrServerClosed) {
        fmt.Printf("error starting server: %s\n", err)
    }
}

WEAVE_HANDLER:internal/clienthandler/node.go

And implement the ServeHTTP method in WeaveHandler:
package internode

import (
    "io"
    "log"
    "net/http"

    "github.com/gmmapowell/ChainLedger/internal/config"
    "github.com/gmmapowell/ChainLedger/internal/records"
)

type WeaveHandler struct {
    nodeConfig config.LaunchableNodeConfig
}

// ServeHTTP implements http.Handler.
func (t *WeaveHandler) ServeHTTP(resp http.ResponseWriter, req *http.Request) {
    buf, err := io.ReadAll(req.Body)
    if err != nil {
        log.Printf("could not read the buffer from the request")
        return
    }
    log.Printf("%s: received an internode block length: %d\n", t.nodeConfig.Name(), len(buf))
    weave, signer, err := records.UnmarshalBinaryWeave(buf)
    if err != nil {
        log.Printf("could not unpack the internode weave: %v\n", err)
        return
    }
    log.Printf("unmarshalled weave message to: %v\n", weave)
    storer := t.nodeConfig.RemoteStorer(signer.Signer.String())
    if storer == nil {
        log.Printf("could not find a handler for remote node %s\n", signer.Signer.String())
        return
    }

    // Now we need to compare and record this
}

func NewWeaveHandler(c config.LaunchableNodeConfig) *WeaveHandler {
    return &WeaveHandler{nodeConfig: c}
}

WEAVE_HANDLER:internal/internode/weavehandler.go

We have to implement the unmarshaller. Note that because the message being sent across consists of two separate elements - the weave itself and the "additional information" (the sending node's name and signature, returned as a Signatory) - we end up with three return values from the unmarshaller: the weave, the signatory and an error.
func UnmarshalBinaryWeave(bytes []byte) (*Weave, *types.Signatory, error) {
    weave := Weave{}
    buf := types.NewBinaryUnmarshallingBuffer(bytes)
    var err error
    weave.ID, err = types.UnmarshalHashFrom(buf)
    if err != nil {
        return nil, nil, err
    }
    weave.ConsistentAt, err = types.UnmarshalTimestampFrom(buf)
    if err != nil {
        return nil, nil, err
    }
    weave.PrevID, err = types.UnmarshalHashFrom(buf)
    if err != nil {
        return nil, nil, err
    }
    nblks, err := types.UnmarshalInt32From(buf)
    if err != nil {
        return nil, nil, err
    }
    weave.LatestBlocks = make([]NodeBlock, nblks)
    for i := 0; i < int(nblks); i++ {
        weave.LatestBlocks[i], err = UnmarshalBinaryNodeBlock(buf)
        if err != nil {
            return nil, nil, err
        }
    }

    signer := types.Signatory{}
    cls, err := types.UnmarshalStringFrom(buf)
    if err != nil {
        return nil, nil, err
    }
    signer.Signer, err = url.Parse(cls)
    if err != nil {
        return nil, nil, err
    }
    signer.Signature, err = types.UnmarshalSignatureFrom(buf)
    if err != nil {
        return nil, nil, err
    }

    err = buf.ShouldBeDone()
    if err != nil {
        return nil, nil, err
    }

    return &weave, &signer, nil
}

WEAVE_HANDLER:internal/records/weave.go

And then we need to handle the individual NodeBlock elements:
func UnmarshalBinaryNodeBlock(buf *types.BinaryUnmarshallingBuffer) (NodeBlock, error) {
    ret := NodeBlock{}
    var err error
    ret.NodeName, err = types.UnmarshalStringFrom(buf)
    if err != nil {
        return ret, err
    }
    ret.LatestBlockID, err = types.UnmarshalHashFrom(buf)

    return ret, err
}

WEAVE_HANDLER:internal/records/nodeblock.go

Assuming everything went well, we should now have the Weave built by the remote node, along with their name and signature.

Checking the Weaves

We now need to check that the weave is something we would consider "valid" as per our definition earlier. But this is where we want to take a moment to pause, reflect, and allow our own loom thread to catch up. So first, we wait half a second:
// ServeHTTP implements http.Handler.
func (t *WeaveHandler) ServeHTTP(resp http.ResponseWriter, req *http.Request) {
    buf, err := io.ReadAll(req.Body)
    if err != nil {
        log.Printf("could not read the buffer from the request")
        return
    }
    log.Printf("%s: received an internode block length: %d\n", t.nodeConfig.Name(), len(buf))
    weave, signatory, err := records.UnmarshalBinaryWeave(buf)
    if err != nil {
        log.Printf("could not unpack the internode weave: %v\n", err)
        return
    }
    log.Printf("unmarshalled weave message to: %v\n", weave)
    storer := t.nodeConfig.RemoteStorer(signatory.Signer.String())
    if storer == nil {
        log.Printf("could not find a handler for remote node %s\n", signatory.Signer.String())
        return
    }

    // Hack-ish: wait 500ms so that our local node has built its own weave
    delay := 500 * time.Millisecond
    timer := t.clock.After(delay)
    <-timer
Then we will check the weave is valid and store the signature in the appropriate journal. This allows us to persistently record all the (valid) signatures we received from remote nodes for each weave:
    // Tell the storer for that node that we have this signature
    err = storer.SignedWeave(weave, signatory.Signature)
    if err != nil {
        panic(fmt.Sprintf("%s: cannot accept signed weave: %v", t.nodeConfig.Name(), err))
    }
}

WEAVE_CHECK_STORE_REMOTE:internal/internode/weavehandler.go

This delegates most of the work to the RemoteStorer:
func (cas *CheckAndStore) SignedWeave(weave *records.Weave, signature types.Signature) error {
    err := weave.VerifySignatureIs(cas.hasher, cas.signer, cas.key, signature)
    if err != nil {
        return err
    }
    return cas.journal.RecordWeaveSignature(weave.ConsistentAt, weave.ID, signature)
}

WEAVE_CHECK_STORE_REMOTE:internal/storage/remotestorer.go

This verifies the signature in the obvious way:
func (w *Weave) VerifySignatureIs(hasher helpers.HasherFactory, signer helpers.Signer, pub *rsa.PublicKey, signature types.Signature) error {
    id := w.HashMe(hasher)
    if !id.Is(w.ID) {
        return fmt.Errorf("remote weave id %s was not the result of computing it locally: %s", w.ID.String(), id.String())
    }
    return signer.Verify(pub, id, signature)
}

WEAVE_CHECK_STORE_REMOTE:internal/records/weave.go

In passing, note that we also need to record our own signature of the Weave as we generate it. That happens in loom_thread.go:
func (t *IntervalLoomThread) Run() {
    delay := time.Duration(t.interval/3) * time.Millisecond
    timer := t.clock.After(delay)
    var prev *records.Weave

    for {
        select {
        case <-t.control:
            log.Printf("%s weaver asked to quit\n", t.loom.Name())
            return
        case weaveBefore := <-timer:
            weaveBefore = weaveBefore.RoundTime(t.interval)
            if !t.myjournal.HasWeaveAt(weaveBefore) {
                weave := t.loom.WeaveAt(weaveBefore, prev)
                if weave != nil {
                    t.myjournal.StoreWeave(weave)
                    signature, err := t.signer.Sign(t.pk, weave.ID)
                    if err != nil {
                        log.Printf("%s failed to sign weave %v\n", t.loom.Name(), weave.ID)
                    } else {
                        t.myjournal.RecordWeaveSignature(weave.ConsistentAt, weave.ID, signature)
                        log.Printf("%s wove at %v: %s\n", t.loom.Name(), weaveBefore, weave.ID.String())
                        weave.MarshalAndSend(t.senders, t.loom.Name(), signature)
                    }
                    // weave.LogMe(t.loom.Name())
                    prev = weave
                } else {
                    log.Printf("%s could not weave at %v\n", t.loom.Name(), weaveBefore)
                }
            }
        }
        timer = t.clock.After(delay)
    }
}

WEAVE_CHECK_STORE_REMOTE:internal/loom/loom_thread.go

The journal is responsible for keeping track of the signatures it has seen for each weave. In memory, this is most easily tracked as a map. Note that doing this does not invalidate my rule that we should never update structures: the map is not itself a record, just a way of keeping track of them, and we never overwrite an entry in it, as should be clear from the code:
func LaunchJournalThread(name string, onNode string, finj helpers.FaultInjection) chan<- JournalCommand {
    var txs []*records.StoredTransaction
    var blocks []*records.Block
    weaves := make(map[types.Timestamp]*records.Weave)
    sigs := make(map[types.Timestamp]types.Signature)
    ret := make(chan JournalCommand, 20)
...
            case JournalRecordWeaveSignatureCommand:
                if sigs[v.When] != nil {
                    log.Printf("duplicate signature for weave at %d", v.When)
                } else {
                    sigs[v.When] = v.Signature
                }
...
}

WEAVE_CHECK_STORE_REMOTE:internal/storage/journal_thread.go

So now we should have all of the signatures for all of the weaves safely stored away.

Collating the Weaves

Unfortunately, this does not meet the condition I set out above, when I said that we would need a third kind of journal. All of this information is spread around the system rather than consolidated in a single place.

So let's create that new kind of journal, a WeaveConsolidator, and have it be the place where all the information about weaves comes together:
package storage

import (
    "log"

    "github.com/gmmapowell/ChainLedger/internal/records"
    "github.com/gmmapowell/ChainLedger/internal/types"
)

type WeaveConsolidator struct {
    commandChan chan<- WeaveConsolidationCommand
}

type WeaveConsolidationCommand interface{}

type WeaveCreatedLocally struct {
    when types.Timestamp
    id   types.Hash
}

type WeaveSigned struct {
    when      types.Timestamp
    id        types.Hash
    by        string
    signature types.Signature
}

type WeaveAndSignatures struct {
    when       types.Timestamp
    id         types.Hash
    signatures map[string]types.Signature
}

func consolidate(onNode string, ch <-chan WeaveConsolidationCommand) {
    consolidation := make(map[types.Timestamp]*WeaveAndSignatures)
    for {
        cmd := <-ch
        switch v := cmd.(type) {
        case WeaveCreatedLocally:
            log.Printf("%s: consolidating weave for %d\n", onNode, v.when)
            if consolidation[v.when] == nil {
                consolidation[v.when] = &WeaveAndSignatures{when: v.when, id: v.id, signatures: make(map[string]types.Signature)}
            } else {
                log.Printf("cannot create weave for %d more than once\n", v.when)
            }
        case WeaveSigned:
            log.Printf("%s: consolidating signature by %s for weave for %d\n", onNode, v.by, v.when)
            if consolidation[v.when] != nil {
                addSig := consolidation[v.when]
                if addSig.signatures[v.by] != nil {
                    log.Printf("cannot add signature to weave for %d by %s more than once\n", v.when, v.by)
                } else if !addSig.id.Is(v.id) {
                    log.Printf("cannot add signature to weave for %d by %s because hash values do not match\n", v.when, v.by)
                } else {
                    addSig.signatures[v.by] = v.signature
                }
            } else {
                log.Printf("cannot sign weave for %d yet, because it has not been created locally\n", v.when)
            }
        default:
            log.Printf("there is no case for command %v", v)
        }
    }
}

func (wc *WeaveConsolidator) LocalWeave(w *records.Weave) {
    cmd := WeaveCreatedLocally{when: w.ConsistentAt, id: w.ID}
    wc.commandChan <- cmd
}

func (wc *WeaveConsolidator) SignedWeave(when types.Timestamp, id types.Hash, by string, sig types.Signature) {
    cmd := WeaveSigned{when: when, id: id, by: by, signature: sig}
    wc.commandChan <- cmd
}

func NewWeaveConsolidator(onNode string) *WeaveConsolidator {
    ch := make(chan WeaveConsolidationCommand, 20)
    go consolidate(onNode, ch)
    return &WeaveConsolidator{commandChan: ch}
}

WEAVE_CONSOLIDATOR:internal/storage/weaveconsolidator.go

This copies the API pattern of the journal: there is an API type (WeaveConsolidator) which is essentially just a wrapper around a channel. Its methods create the various command structs and push them down the channel. At the other end, a goroutine launched by NewWeaveConsolidator reads the commands and dispatches them sequentially, ensuring that there are no race conditions.

Once again, I can hear the objection that "this is continually updating the records in storage", and with some justification. The truth of the matter is that I thought long and hard about it and decided this was the most reasonable in-memory implementation. What I am going for here is a relational-style model in which there is one table recording all of the weaves that have been created, and an ancillary table containing the "weaveId" (not the ID of the weave, but just an integer index) along with the signatory information. I think you would agree that only ever inserting into one or the other of those tables would be "insert only" and would have much the same semantics as we have here. The only problem is that, unless I have missed something, that is not at all an easy thing to implement with the data structures available to me. Hence this approach. We should return to it when we attempt to build all this on top of DynamoDB; hopefully that will convince any doubters.
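For what it's worth, here is a sketch of the relational shape I have in mind - two insert-only tables, with the signature rows referring back to the weave rows by an integer index rather than by modifying the weave row itself. The names and types are purely illustrative.
package sketch

// weaveRow and signatureRow model the two "insert only" tables described
// above: we only ever append to them, and a signature refers to its weave
// by index rather than by updating the weave row.
type weaveRow struct {
    consistentAt int64  // the weave's timestamp
    weaveID      string // the hashed ID of the weave
}

type signatureRow struct {
    weaveIdx  int    // index into the weaves table (the "weaveId" above)
    node      string // which node signed
    signature []byte
}

type weaveTables struct {
    weaves     []weaveRow
    signatures []signatureRow
}

func (t *weaveTables) insertWeave(at int64, id string) int {
    t.weaves = append(t.weaves, weaveRow{consistentAt: at, weaveID: id})
    return len(t.weaves) - 1
}

func (t *weaveTables) insertSignature(weaveIdx int, node string, sig []byte) {
    t.signatures = append(t.signatures, signatureRow{weaveIdx: weaveIdx, node: node, signature: sig})
}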

OK, we're not actually using the consolidator yet. So let's do that. We want to call it from the local journal when it sees a new Weave being created, and from all the journals when they record a signature.

First off, the consolidator is passed into all the journal threads. This involves a long chain of plumbing that goes back to creating exactly one consolidator per node in the configuration (either node configuration or harness configuration). That is a lot of effort, so it is not shown here, although there is a rough sketch of the wiring after the code below.
func LaunchJournalThread(name string, onNode string, consolidator *WeaveConsolidator, finj helpers.FaultInjection) chan<- JournalCommand {
    var txs []*records.StoredTransaction
    var blocks []*records.Block
We then use this when we are storing the Weave, but only if this is the journal for the local node:
            case JournalStoreWeaveCommand:
                weaves[v.Weave.ConsistentAt] = v.Weave
                if name == onNode {
                    consolidator.LocalWeave(v.Weave)
                }
And we use it again when we are recording the signature for any journal:
            case JournalRecordWeaveSignatureCommand:
                if sigs[v.When] != nil {
                    log.Printf("duplicate signature for weave at %d", v.When)
                } else {
                    sigs[v.When] = v.Signature
                    consolidator.SignedWeave(v.When, v.ID, name, v.Signature)
                }
...
}

WEAVE_CONSOLIDATOR:internal/storage/journal_thread.go
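As promised, here is a rough sketch of that wiring: one consolidator is created per node and shared by every journal thread the node owns - its own journal plus one per remote node. Only NewWeaveConsolidator and LaunchJournalThread are real; the surrounding function and its parameters are made up for illustration.
// wireJournals is not real code; it just illustrates the topology: one
// WeaveConsolidator per node, passed to each of that node's journal threads.
func wireJournals(myNode string, allNodes []string, finj helpers.FaultInjection) map[string]chan<- storage.JournalCommand {
    consolidator := storage.NewWeaveConsolidator(myNode)
    journals := make(map[string]chan<- storage.JournalCommand)
    for _, journalFor := range allNodes { // includes myNode itself
        journals[journalFor] = storage.LaunchJournalThread(journalFor, myNode, consolidator, finj)
    }
    return journals
}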

And, unsurprisingly, we now see the relevant messages coming out from the consolidator:
2025/03/05 15:11:52 http://localhost:5002: consolidating weave for 1741187512000
2025/03/05 15:11:52 http://localhost:5001: consolidating weave for 1741187512000
...
2025/03/05 15:11:52 http://localhost:5002: consolidating signature by http://localhost:5002 for weave for 1741187512000
2025/03/05 15:11:52 sending blob(619) to http://localhost:5001/remoteweave
2025/03/05 15:11:52 http://localhost:5001: consolidating signature by http://localhost:5001 for weave for 1741187512000
2025/03/05 15:11:52 sending blob(619) to http://localhost:5002/remoteweave
...
2025/03/05 15:11:52 http://localhost:5001: consolidating signature by http://localhost:5002 for weave for 1741187512000
2025/03/05 15:11:52 http://localhost:5002: consolidating signature by http://localhost:5001 for weave for 1741187512000

Termination Condition

We finally have the conditions in place to "terminate the node cleanly". Some time back, I said we would just "let the node run for a couple of seconds" to finish calculating the blocks. We can now say that, for the harness at least, we can quit the moment the harness has seen the consolidator on each node put its hand up and say "I have generated this weave ID and have collected signatures from all the other nodes".

So let's do that:
    for _, n := range nodes {
        n.ClientsDone()
    }

    handsUp := make([]chan bool, len(nodes))
    for k, n := range config.NodeNames() {
        launcher := config.Launcher(n)
        handsUp[k] = make(chan bool)
        launcher.Consolidator().NotifyMeWhenStable(handsUp[k])
    }
    timeout := time.After(5 * time.Second)
outer:
    for k, c := range handsUp {
        select {
        case <-timeout:
            log.Printf("did not consolidate after 5s")
            break outer
        case worked := <-c:
            log.Printf("consolidator %d notified me: %v", k, worked)
        }
    }

    for _, n := range nodes {
        n.Terminate()
    }

HARNESS_SIGNAL_COMPLETE:cmd/harness/main.go

This creates a slice of channels, one for each node running under the harness (each of which has its own consolidator). It hands each channel to that node's consolidator to write to, and keeps hold of the channel so that the harness can read from it.

It then sets up a 5s timer as a backstop and enters a loop waiting for a notification on each channel in turn. If the timer fires first, it logs a message and breaks out of the loop.

There's quite a bit of plumbing (again) to make all this work, but this is the only thing that really matters:
func (wc *WeaveConsolidator) consolidate(ch <-chan WeaveConsolidationCommand) {
    consolidation := make(map[types.Timestamp]*WeaveAndSignatures)
    for {
        cmd := <-ch
        switch v := cmd.(type) {
        case WeaveCreatedLocally:
            log.Printf("%s: consolidating weave for %d\n", wc.onNode, v.when)
            if consolidation[v.when] == nil {
                consolidation[v.when] = &WeaveAndSignatures{when: v.when, id: v.id, signatures: make(map[string]types.Signature)}
            } else {
                log.Printf("cannot create weave for %d more than once\n", v.when)
            }
        case WeaveSigned:
            log.Printf("%s: consolidating signature by %s for weave for %d\n", wc.onNode, v.by, v.when)
            if consolidation[v.when] != nil {
                addSig := consolidation[v.when]
                if addSig.signatures[v.by] != nil {
                    log.Printf("cannot add signature to weave for %d by %s more than once\n", v.when, v.by)
                } else if !addSig.id.Is(v.id) {
                    log.Printf("cannot add signature to weave for %d by %s because hash values do not match\n", v.when, v.by)
                } else {
                    addSig.signatures[v.by] = v.signature
                    if wc.stableChan != nil && len(addSig.signatures) == wc.nodeCount {
                        wc.stableChan <- true
                    }
                }
            } else {
                log.Printf("cannot sign weave for %d yet, because it has not been created locally\n", v.when)
            }
        default:
            log.Printf("there is no case for command %v", v)
        }
    }
}

HARNESS_SIGNAL_COMPLETE:internal/storage/weaveconsolidator.go

The test that the channel is not nil covers two cases: the one in which the harness is not involved, so nobody has asked to be notified; and the one in which there are still active clients, so we know we cannot possibly be quiescent yet, and a fully signed weave at this point would not be the final consolidated weave.
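For completeness, NotifyMeWhenStable itself is part of that plumbing and is not shown; presumably it does little more than record the channel, something like the sketch below. (Since the consolidate goroutine reads the field directly, it needs to be called before the final signatures arrive - or, more robustly, be routed through the command channel like everything else.)
// A sketch only: record the channel the harness wants notified on. The real
// version may also capture the node count used in the check above.
func (wc *WeaveConsolidator) NotifyMeWhenStable(ch chan bool) {
    wc.stableChan = ch
}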

And now the whole thing wraps up in under a second:
2025/03/05 17:41:18 consolidator 0 notified me: true
2025/03/05 17:41:18 consolidator 1 notified me: true
...
2025/03/05 17:41:18 elapsed time = 899
2025/03/05 17:41:18 harness complete

Conclusion

Well, that would appear to be "code complete" for ChainLedger. And certainly, if you only want something that runs in memory and works down the "happy path", this is the software for you.

However, the real world is not like that, and all I've really built so far is a testbed which I can use to explore issues with distributed systems. But that's for another time.

As is implementing the "real-life" code that will work on cloud infrastructure.

And beyond that, there will always be more that can be done. When we get there, rest assured there will be some "exercises for the reader" (a.k.a. things I didn't get around to).
