We have received messages from other nodes and successfully unpacked them. Now we need to actually do something with them.
We want to do two things:
- Validate them and check their signatures;
- Store them locally in a journal as we would our own messages.
One thing I do need to point out is that I misunderstood how bytes.Buffer accounts for the number of bytes remaining to be read. Because I speak English, I assumed that the method I wanted was the one called Available(), and that's what I used. It seemed to work and I didn't see any errors. However, when I changed the message to include the publisher name, it suddenly started failing. Long story short, the correct method to call is Len() which, in spite of the name, does not return the length of the buffer, but the number of bytes remaining to be read. Available(), contrariwise, returns the spare capacity remaining before the underlying byte slice would need to be reallocated. That makes sense when you are writing to the buffer, but not so much (to me) when reading from it.
This, then, is the code we genuinely want to add:
package storage

import (
	"crypto/rsa"
	"log"

	"github.com/gmmapowell/ChainLedger/internal/records"
)

type RemoteStorer interface {
	Handle(stx *records.StoredTransaction) error
}

type CheckAndStore struct {
	key     *rsa.PublicKey
	journal Journaller
}

func (cas *CheckAndStore) Handle(stx *records.StoredTransaction) error {
	log.Printf("asked to check and store remote tx\n")
	return nil
}

func NewRemoteStorer(key *rsa.PublicKey, journal Journaller) RemoteStorer {
	return &CheckAndStore{key: key, journal: journal}
}
INTERNODE_REMOTE_STORER:internal/storage/remotehandler.go
This is, of course, just a stub: mainly a wrapper around the place where we want to put the code, making sure it lives somewhere that has a pointer to a journal and the public key of the publishing node. We'll come back and fill it in later.

And we want to call it from the transaction HTTP handler:
// ServeHTTP implements http.Handler.
func (t *TransactionHandler) ServeHTTP(resp http.ResponseWriter, req *http.Request) {
	buf, err := io.ReadAll(req.Body)
	if err != nil {
		log.Printf("could not read the buffer from the request")
		return
	}
	log.Printf("have received an internode request length: %d\n", len(buf))
	stx, err := records.UnmarshalBinaryStoredTransaction(buf)
	if err != nil {
		log.Printf("could not unpack the internode message: %v\n", err)
		return
	}
	log.Printf("unmarshalled message to: %v\n", stx)
	publishedBy := stx.Publisher.Signer.String()
	storer := t.nodeConfig.RemoteStorer(publishedBy)
	if storer == nil {
		log.Printf("could not find a handler for remote node %s\n", publishedBy)
		return
	}
	storer.Handle(stx)
}
INTERNODE_REMOTE_STORER:internal/internode/transactionhandler.go
Everything else in this commit is just a question of making sure that we have the name of the publisher, and that everything is in the right place for that call to RemoteStorer to work.

Checking the Signature
So now we have a public key and a StoredTransaction in the same place. There is a very real threat that the transaction we are being given is fraudulent: we have no direct way to verify where it has come from. Yes, I could require the use of "client certificates", but I want a signature anyway to make sure that the chain is permanently verifiable, so I might as well use it now, which also means I have to write that same verifier code.

So let's update the handler we stubbed out above. We are simply going to delegate checking the signature to the stored transaction and, if it doesn't match, we will return the error to the handler.
func (cas *CheckAndStore) Handle(stx *records.StoredTransaction) error {
	log.Printf("asked to check and store remote tx\n")
	err := stx.VerifySignature(cas.hasher, cas.signer, cas.key)
	if err != nil {
		return err
	}
	return nil
}
INTERNODE_VERIFY_SIGNATURE:internal/storage/remotehandler.go
This error is "handled" by the HTTP server panicking:

func (t *TransactionHandler) ServeHTTP(resp http.ResponseWriter, req *http.Request) {
	buf, err := io.ReadAll(req.Body)
	if err != nil {
		log.Printf("could not read the buffer from the request")
		return
	}
	log.Printf("have received an internode request length: %d\n", len(buf))
	stx, err := records.UnmarshalBinaryStoredTransaction(buf)
	if err != nil {
		log.Printf("could not unpack the internode message: %v\n", err)
		return
	}
	log.Printf("unmarshalled message to: %v\n", stx)
	publishedBy := stx.Publisher.Signer.String()
	storer := t.nodeConfig.RemoteStorer(publishedBy)
	if storer == nil {
		log.Printf("could not find a handler for remote node %s\n", publishedBy)
		return
	}
	err = storer.Handle(stx)
	if err != nil {
		panic(fmt.Sprintf("failed to store remote transaction: %v", err))
	}
}
INTERNODE_VERIFY_SIGNATURE:internal/internode/transactionhandler.go
To be clear, this is definitely NOT the right thing to do in this situation. As it happens, the net/http server recovers panics raised inside a handler, so this only aborts the goroutine serving that one request rather than bringing down the node - not as much the end of the world as you might imagine. On the other hand, it's still not a good thing to do.

The problem is, what is a good thing to do? Obviously, we can't store a transaction with an invalid signature - it's either incorrect or fraudulent. Assuming that our code is all working correctly, it shouldn't be incorrect, so fraud is the most likely case. In that case, we don't want to respond to the node which sent the message. All in all, the only thing to do is to forget about it, possibly logging the error. Later, we will discuss the ramifications of nodes not agreeing on what should or should not be included in the blockchain, although resolving those problems will probably be left as an exercise for the reader.
The StoredTransaction needs to check that the signature is valid. It's not really enough, though, to just check the signature against the hash; it's important to check that the hash is a valid hash of the transaction information. So we first re-hash all the information in the transaction, then check that this hash matches the transaction ID (which was originally obtained by hashing the transaction), and then assert that the signature matches the hash based on the public key of the sending node.
func (s *StoredTransaction) VerifySignature(hasher helpers.HasherFactory, signer helpers.Signer, pub *rsa.PublicKey) error {
	txid := s.hashMe(hasher)
	if !txid.Is(s.TxID) {
		return fmt.Errorf("remote txid %s was not the result of computing it locally: %s", s.TxID.String(), txid.String())
	}
	return signer.Verify(pub, txid, s.Publisher.Signature)
}
INTERNODE_VERIFY_SIGNATURE:internal/records/storedtransaction.go
If you're wondering where that hashMe function came from, it's the same code that we used to hash the transaction and produce the transaction ID on the generating node. But, in order to reuse it, I've extracted it from CreateStoredTransaction. This is the new hashMe function (note that it is local to this package):

func (stx *StoredTransaction) hashMe(hasherFactory helpers.HasherFactory) types.Hash {
	hasher := hasherFactory.NewHasher()
	binary.Write(hasher, binary.LittleEndian, stx.WhenReceived)
	hasher.Write([]byte(stx.ContentLink.String()))
	hasher.Write([]byte("\n"))
	hasher.Write(stx.ContentHash)
	for _, v := range stx.Signatories {
		hasher.Write([]byte(v.Signer.String()))
		hasher.Write([]byte("\n"))
		hasher.Write(v.Signature)
	}
	return hasher.Sum(nil)
}

Once we've extracted that, this is what is left in CreateStoredTransaction:

func CreateStoredTransaction(clock helpers.Clock, hasherFactory helpers.HasherFactory, signer helpers.Signer, nodeKey *rsa.PrivateKey, tx *api.Transaction) (*StoredTransaction, error) {
	copyLink := *tx.ContentLink
	ret := StoredTransaction{WhenReceived: clock.Time(), ContentLink: &copyLink, ContentHash: bytes.Clone(tx.ContentHash), Signatories: make([]*types.Signatory, len(tx.Signatories))}
	for i, v := range tx.Signatories {
		copySigner := *v.Signer
		copySig := types.Signature(bytes.Clone(v.Signature))
		signatory := types.Signatory{Signer: &copySigner, Signature: copySig}
		ret.Signatories[i] = &signatory
	}
	ret.TxID = ret.hashMe(hasherFactory)
	sig, err := signer.Sign(nodeKey, ret.TxID)
	if err != nil {
		return nil, err
	}
	ret.Publisher = &types.Signatory{Signer: signer.SignerName(), Signature: sig}
	return &ret, nil
}

Obviously, we need to provide a Verify method on the Signer interface, which takes the public key of the remote node (we don't have their private key), the hash to verify and the signature the remote node provided. This is then implemented by the RSASigner class.

type Signer interface {
	Sign(pk *rsa.PrivateKey, hash types.Hash) (types.Signature, error)
	SignerName() *url.URL
	Verify(pub *rsa.PublicKey, hash types.Hash, sig types.Signature) error
}
type RSASigner struct {
	Name *url.URL
}

func (s RSASigner) SignerName() *url.URL {
	return s.Name
}

func (s *RSASigner) Sign(pk *rsa.PrivateKey, hash types.Hash) (types.Signature, error) {
	sig, err := rsa.SignPSS(rand.Reader, pk, crypto.SHA512, []byte(hash), nil)
	if err != nil {
		return nil, err
	}
	return sig, nil
}

func (s *RSASigner) Verify(pub *rsa.PublicKey, hash types.Hash, sig types.Signature) error {
	return rsa.VerifyPSS(pub, crypto.SHA512, hash, sig, nil)
}
INTERNODE_VERIFY_SIGNATURE:internal/helpers/signer.go
You may have noticed that I have updated RSASigner but not MockSigner. Yes, once again I'm letting the tests lag behind the actual production code because I'm lazy. Add it (along with the appropriate tests of the verification code) and send me a pull request.

You may also notice that a number of other things changed slightly: once again, I found myself in need of objects I didn't have and had to refactor parts of the code to make them available. I'm starting to feel some tension around the messiness of much of this, and a desire to go through a much more widespread refactoring to tidy "those kinds of things" up.
Storing the Transaction
Finally, we need to actually remember that this has come in and add it to our list of transactions pending being incorporated into "the next block".

func (cas *CheckAndStore) Handle(stx *records.StoredTransaction) error {
	log.Printf("asked to check and store remote tx\n")
	err := stx.VerifySignature(cas.hasher, cas.signer, cas.key)
	if err != nil {
		return err
	}
	return cas.journal.RecordTx(stx)
}
INTERNODE_STORE_REMOTE_TX:internal/storage/remotehandler.go
Yeah, I was surprised it was that easy, too.

Conclusions
We have taken the message that arrived and was unpacked in the last episode, checked that the hash and signature are correct, and then stored it locally in a storage area specific to that remote node (i.e. the transactions generated by each node are kept in separate journals).

Now we can move on to handling the blocks as they arrive from remote nodes.