Pipelining
Pipelining is a paradigm used by Crypto++ that allows data to flow from a source to a sink. As the data flows, it encounters filters that transform it in some way en route to the sink. All of the data can be pumped at once, or the flow can be throttled to control and limit it.
The motivation for the design was the Unix pipe system, so a Crypto++ pipeline has very close analogies to a Unix command.
Unix Commands
Consider a set of commands to Base64 encode a file and then save the encoding to a second file. The commands might look something like below.
cat filename | base64 > filename.b64
The Crypto++ pipeline to accomplish the same would be similar to the following.
FileSource f(filename, new Base64Encoder(new FileSink(filename + ".b64")));
A generalized pipeline looks like the following. You can chain multiple filters, if desired.
// One filter
Source s(source, new Filter(new Sink(destination)));

// Two filters
Source s(source, new Filter(new Filter(new Sink(destination))));
If you have the following snippet of code:
ECB_Mode< AES >::Encryption e;
e.SetKey( key, key.size() );

StringSource ss( plain, true,
    new StreamTransformationFilter( e,
        new FileSink( "cipher.bin" )
    ) // StreamTransformationFilter
); // StringSource
Then the hypothetical Unix and Linux commands might look like:
cat plain.txt | aes -enc -k key -m ecb > cipher.bin
And if you added a hex encoder:
StringSource ss( plain, true,
    new StreamTransformationFilter( e,
        new HexEncoder(
            new FileSink( "cipher.txt" )
        ) // HexEncoder
    ) // StreamTransformationFilter
); // StringSource
Then the hypothetical Unix and Linux commands might look like:
cat plain.txt | aes -enc -k key -m ecb | hex -e > cipher.txt
Documentation
The Annotated Class Reference includes documentation for the Source interface and Sink interface.
All filters inherit from BufferedTransformation. With respect to Filters, objects of interest are:
- AuthenticatedEncryptionFilter and AuthenticatedDecryptionFilter
- StreamTransformationFilter
- HashFilter
- HashVerificationFilter
- SignerFilter
- SignatureVerificationFilter
Generally, a user defined filter will derive from class Filter. Wei Dai recommends examining SignerFilter in filters.h for a filter example.
Ownership
Object ownership is an important detail in a pipeline. According to "Important Usage Notes" in ReadMe.txt:
If a constructor for A takes a pointer to an object B (except primitive types such as int and char), then A owns B and will delete B at A's destruction. If a constructor for A takes a reference to an object B, then the caller retains ownership of B and should not destroy it until A no longer needs it.
That means filters created with new are owned by the outer or encompassing object and will be destroyed by that object when no longer needed. In the example shown below, the new Sink is owned by the Filter, and the new Filter is owned by the Source. The filters created with new are destroyed when the Source is destroyed.
Source s(source, new Filter(new Sink(destination)));
The destination is not destroyed automatically. Only the filters created with new are destroyed. Once the destructors run and the filters are destroyed, you can still use destination.
You should not do the following, since the filter will be deleted twice. In the code below, the filter f is deleted when the stack frame exits since f is a stack variable. f is also deleted when the pipeline is destroyed since the source owns it.
// Do not do this
Filter f(new Sink(destination));
Source s(source, &f);
If you need an object to persist, like f, then you should use a Redirector. The Redirector stops ownership so the object is not deleted in the pipeline. An example of a filter that you might want to survive is an AuthenticatedDecryptionFilter, so you can inspect the result of decryption without catching an exception.
Filter f(new Sink(destination));
Source s(source, new Redirector(f));
Sample Programs
The following program transfers data from the first string to the second string. Though not very useful, it is the simplest demonstration of pipelining. An important reminder (from the ReadMe) is that the StringSource, which takes a pointer to the StringSink, owns the StringSink. So the StringSource will delete the StringSink when the StringSource destructor is invoked.
string s1 = "Pipeline", s2;
StringSource ss( s1, true, new StringSink( s2 ) );

cout << "s1: " << s1 << endl;
cout << "s2: " << s2 << endl;
The following program uses an AutoSeededRandomPool
to generate an AES key. The key is hex encoded and then printed.
AutoSeededRandomPool prng;

SecByteBlock key(AES::DEFAULT_KEYLENGTH);
prng.GenerateBlock( key, key.size() );

string encoded;
StringSource ss( key.data(), key.size(), true,
    new HexEncoder(
        new StringSink( encoded )
    ) // HexEncoder
); // StringSource

cout << "key: " << encoded << endl;
At times, a result is required from an intermediate object that is participating in a pipeline. Most notable is the DecodingResult from a HashVerificationFilter. In this situation, we do not want the pipeline to own the object and attempt to destroy it. To accomplish the goal, a Redirector is used. Notice that the Redirector takes a reference to an object, and not a pointer to an object.
CCM< AES, TAG_SIZE >::Decryption d;
d.SetKeyWithIV( key, key.size(), iv, sizeof(iv) );
...

AuthenticatedDecryptionFilter df( d,
    new StringSink( recovered )
); // AuthenticatedDecryptionFilter

// Cipher text includes the MAC tag
StringSource ss( cipher, true,
    new Redirector( df )
); // StringSource

// If the object does not throw, here's the only
// opportunity to check the data's integrity
bool b = df.GetLastResult();
if( true == b ) {
    cout << recovered << endl;
}
One topic that comes up on occasion is skipping source bytes. Skip is part of the Filter interface, and it works on the output buffer (more precisely, the AttachedTransformation). To skip bytes on a Source, use Pump and discard the bytes by using a NULL AttachedTransformation. Also see Skip'ing on a Source does not work as expected on Stack Overflow and Issue 248: Skip'ing on a Source does not work.
int main(int argc, char* argv[])
{
    string str1, str2;
    HexEncoder enc(new StringSink(str1));

    for(unsigned int i = 0; i < 32; i++)
        enc.Put((byte)i);
    enc.MessageEnd();

    cout << "str1: " << str1 << endl;

    // 'ss' has a NULL AttachedTransformation()
    StringSource ss(str1, false);
    ss.Pump(10);

    // Attach the real filter chain to 'ss'
    ss.Attach(new StringSink(str2));
    ss.PumpAll();

    cout << "str2: " << str2 << endl;
    return 0;
}
Another topic that comes up on occasion is manually pumping data. The following example pumps data in 4 KB chunks. A more complete discussion and the example can be found at Pumping Data.
inline bool EndOfFile(const FileSource& file)
{
    std::istream* stream = const_cast<FileSource&>(file).GetStream();
    return stream->eof();
}

int main(int argc, char* argv[])
{
    try
    {
        byte key[AES::DEFAULT_KEYLENGTH]={}, iv[AES::BLOCKSIZE]={};

        CTR_Mode<AES>::Encryption encryptor;
        encryptor.SetKeyWithIV(key, sizeof(key), iv);

        MeterFilter meter;
        StreamTransformationFilter filter(encryptor);

        FileSource source("plain.bin", false);
        FileSink sink("cipher.bin");

        source.Attach(new Redirector(filter));
        filter.Attach(new Redirector(meter));
        meter.Attach(new Redirector(sink));

        const word64 BLOCK_SIZE = 4096;
        word64 processed = 0;

        while(!EndOfFile(source) && !source.SourceExhausted())
        {
            source.Pump(BLOCK_SIZE);
            filter.Flush(false);

            processed += BLOCK_SIZE;
            if (processed % (1024*1024*10) == 0)
                cout << "Processed: " << meter.GetTotalBytes() << endl;
        }

        // Signal there is no more data to process.
        // The dtor's will do this automatically.
        filter.MessageEnd();
    }
    catch(const Exception& ex)
    {
        cerr << ex.what() << endl;
    }

    return 0;
}
Downloads
No downloads.