MessagePack is an efficient binary serialization format. It lets you exchange data among multiple languages, like JSON, but it is faster and smaller. Small integers are encoded into a single byte, and typical short strings require only one extra byte in addition to the strings themselves.
MessagePack is supported by over 50 programming languages and environments. See list of implementations.
Redis scripting has support for MessagePack because it is a fast and compact serialization format with a simple to implement specification. I liked it so much that I implemented a MessagePack C extension for Lua just to include it into Redis.
Salvatore Sanfilippo, creator of Redis
Fluentd uses MessagePack for all internal data representation. It's crazy fast because of zero-copy optimization of msgpack-ruby. Now MessagePack is an essential component of Fluentd to achieve high performance and flexibility at the same time.
Sadayuki Furuhashi, creator of Fluentd
Treasure Data built a multi-tenant database optimized for analytical queries using MessagePack. The schemaless database is growing by billions of records every month. We also use MessagePack as a glue between components. Actually we just wanted a fast replacement of JSON, and MessagePack is simply useful.
Kazuki Ohta, CTO
MessagePack has been simply invaluable to us. We use MessagePack + Memcache to cache many of our feeds on Pinterest. These feeds are compressed and very quick to unpack thanks to MessagePack while Memcache gives us fast atomic pushes.
MessagePack is a binary-based JSON-like serialization library.
MessagePack for D is a pure D implementation of MessagePack.
Features
Small size and High performance
Zero copy serialization / deserialization
Streaming deserializer for non-contiguous IO situations
Supports D features (Ranges, Tuples, real type)
Note: The real type is only supported in D.
Don't use the real type when communicating with other programming languages.
Note that Unpacker will raise an exception if a loss of precision occurs.
Current Limitations
No circular references support
If you want to use the LDC compiler, you need at least version 0.15.2 beta2
Install
Use dub to add it as a dependency:
% dub install msgpack-d
Usage
Example code can be found in the example directory.
msgpack-d is very simple to use. Use pack for serialization, and unpack for deserialization:
import std.file;
import msgpack;

struct S { int x; float y; string z; }

void main()
{
    S input = S(10, 25.5, "message");

    // serialize data
    ubyte[] inData = pack(input);

    // write data to a file
    write("file.dat", inData);

    // read data from a file
    ubyte[] outData = cast(ubyte[])read("file.dat");

    // unserialize the data
    S target = outData.unpack!S();

    // verify data is the same
    assert(target.x == input.x);
    assert(target.y == input.y);
    assert(target.z == input.z);
}
Feature: Skip serialization/deserialization of a specific field.
Use the @nonPacked attribute:
struct User
{
    string name;
    @nonPacked int level;  // pack / unpack will ignore the 'level' field
}
Feature: Use your own serialization/deserialization routines for custom class and struct types.
msgpack-d provides the functions registerPackHandler / registerUnpackHandler to allow you
to use custom routines during the serialization or deserialization of user-defined class and struct types.
This feature is especially useful when serializing a derived class object when that object is statically
typed as a base class object.
For example:
class Document { }

class XmlDocument : Document
{
    this() { }
    this(string name) { this.name = name; }
    string name;
}

void xmlPackHandler(ref Packer p, ref XmlDocument xml)
{
    p.pack(xml.name);
}

void xmlUnpackHandler(ref Unpacker u, ref XmlDocument xml)
{
    u.unpack(xml.name);
}

void main()
{
    /// Register the 'xmlPackHandler' and 'xmlUnpackHandler' routines for
    /// XmlDocument object instances.
    registerPackHandler!(XmlDocument, xmlPackHandler);
    registerUnpackHandler!(XmlDocument, xmlUnpackHandler);

    /// Now we can serialize/deserialize XmlDocument object instances via a
    /// base class reference.
    Document doc = new XmlDocument("test.xml");
    auto data = pack(doc);
    XmlDocument xml = unpack!XmlDocument(data);
    assert(xml.name == "test.xml");  // xml.name is "test.xml"
}
The PackerImpl / Unpacker / StreamingUnpacker types
These types are used by the pack and unpack functions.
MessagePack is an efficient binary serialization format.
It lets you exchange data among multiple languages, like JSON,
but it is faster and smaller.
This package provides CPython bindings for reading and writing MessagePack data.
Very important notes for existing users
PyPI package name
TL;DR: When upgrading from msgpack-0.4 or earlier, don't do pip install -U msgpack-python.
Do pip uninstall msgpack-python; pip install msgpack instead.
The package name on PyPI was changed to msgpack from 0.5.
I uploaded a transitional package (msgpack-python 0.5, which depends on msgpack)
for a smooth transition from msgpack-python to msgpack.
Sadly, this doesn't work for an upgrade install. After pip install -U msgpack-python,
msgpack is removed and import msgpack fails.
Deprecating encoding option
The encoding and unicode_errors options are deprecated.
For the packer, always use UTF-8. Storing strings in encodings other than UTF-8 is not recommended.
For backward compatibility, you can use use_bin_type=False and pack bytes
objects into the msgpack raw type.
For the unpacker, there is a new raw option. It is True by default
for backward compatibility, but it will be changed to False in the near future.
You can use raw=False instead of encoding='utf-8'.
Planned backward incompatible changes
For msgpack 1.0, I am planning these breaking changes:
packer and unpacker: Remove the encoding and unicode_errors options.
packer: Change the default of the use_bin_type option from False to True.
unpacker: Change the default of the raw option from True to False.
unpacker: Reduce all max_xxx_len options to values suited to typical usage.
unpacker: Remove the write_bytes option from all methods.
To keep these breaking changes from breaking your application, please:
Don't use deprecated options.
Pass the use_bin_type and raw options explicitly (see the sketch below).
If your application handles large (>1MB) data, specify the max_xxx_len options too.
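As a hedge against these default changes, here is a minimal sketch that passes both options explicitly, using only the documented packb/unpackb API:

import msgpack

# Passing use_bin_type and raw explicitly keeps behaviour identical
# across the 0.x defaults and the planned 1.0 defaults.
packed = msgpack.packb([b"spam", u"eggs"], use_bin_type=True)
unpacked = msgpack.unpackb(packed, raw=False)
assert unpacked == [b"spam", u"eggs"]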
Install
$ pip install msgpack
PyPy
msgpack provides a pure Python implementation, which PyPy can use.
Windows
When you can't use a binary distribution, you need to install Visual Studio
or the Windows SDK on Windows.
Without the extension, the pure Python implementation runs slowly on CPython.
You should always specify the use_list keyword argument for backward compatibility.
See the performance notes on the use_list option below.
Read the docstring for other options.
Streaming unpacking
Unpacker is a "streaming unpacker". It unpacks multiple objects from one
stream (or from bytes provided through its feed method).
import msgpack
from io import BytesIO

buf = BytesIO()
for i in range(100):
    buf.write(msgpack.packb(i, use_bin_type=True))

buf.seek(0)

unpacker = msgpack.Unpacker(buf, raw=False)
for unpacked in unpacker:
    print(unpacked)
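The feed method mentioned above can also be used without a stream object; a minimal sketch:

import msgpack

unpacker = msgpack.Unpacker(raw=False)
unpacker.feed(msgpack.packb({"compact": True}, use_bin_type=True))
unpacker.feed(msgpack.packb([1, 2, 3], use_bin_type=True))
for unpacked in unpacker:
    print(unpacked)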
Packing/unpacking of custom data type
It is also possible to pack/unpack custom data types. Here is an example for
datetime.datetime.
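The original example is not reproduced here; the following is a minimal sketch of the usual approach with the default and object_hook parameters, where the __datetime__ marker dict is an illustrative convention rather than part of the library:

import datetime
import msgpack

def encode_datetime(obj):
    # Hypothetical convention: wrap datetimes in a marker dict.
    if isinstance(obj, datetime.datetime):
        return {"__datetime__": True, "as_str": obj.strftime("%Y%m%dT%H:%M:%S.%f")}
    return obj

def decode_datetime(obj):
    if "__datetime__" in obj:
        return datetime.datetime.strptime(obj["as_str"], "%Y%m%dT%H:%M:%S.%f")
    return obj

now = datetime.datetime.now()
packed = msgpack.packb(now, default=encode_datetime, use_bin_type=True)
assert msgpack.unpackb(packed, object_hook=decode_datetime, raw=False) == now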
As an alternative to iteration, Unpacker objects provide unpack,
skip, read_array_header and read_map_header methods. The former two
read an entire message from the stream, respectively de-serialising and returning
the result, or ignoring it. The latter two methods return the number of elements
in the upcoming container, so that each element in an array, or key-value pair
in a map, can be unpacked or skipped individually.
Each of these methods may optionally write the packed data it reads to a
callback function:
from io import BytesIO

def distribute(unpacker, get_worker):
    nelems = unpacker.read_map_header()
    for i in range(nelems):
        # Select a worker for the given key
        key = unpacker.unpack()
        worker = get_worker(key)

        # Send the value as a packed message to worker
        bytestream = BytesIO()
        unpacker.skip(bytestream.write)
        worker.send(bytestream.getvalue())
Notes
string and binary type
Early versions of msgpack didn't distinguish string and binary types (like Python 1).
The type for representing both string and binary types was named raw.
For backward compatibility reasons, msgpack-python will still default all
strings to byte strings, unless you specify the use_bin_type=True option in
the packer. If you do so, it will use a non-standard type called bin to
serialize byte arrays, and raw comes to mean str. If you want to
distinguish bin and raw in the unpacker, specify raw=False.
Note that Python 2 defaults to byte arrays over Unicode strings.
You can use it with default and ext_hook. See below.
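As a hedged sketch of that mechanism, the following encodes Python sets as an application-chosen ext type; the typecode 42 and the set convention are assumptions for illustration, not part of msgpack itself:

import msgpack

def default(obj):
    # Encode sets as an application-defined ext type (typecode 42 is arbitrary).
    if isinstance(obj, set):
        return msgpack.ExtType(42, msgpack.packb(sorted(obj), use_bin_type=True))
    raise TypeError("Cannot serialize %r" % (obj,))

def ext_hook(code, data):
    if code == 42:
        return set(msgpack.unpackb(data, raw=False))
    return msgpack.ExtType(code, data)

packed = msgpack.packb({"tags": {"a", "b"}}, default=default, use_bin_type=True)
assert msgpack.unpackb(packed, ext_hook=ext_hook, raw=False) == {"tags": {"a", "b"}}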
Note about performance
GC
CPython's GC starts when the number of allocated objects grows.
This means unpacking a large message may trigger unnecessary GC runs.
You can use gc.disable() when unpacking a large message, as in the sketch below.
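A minimal sketch of that workaround, assuming the message is already in memory as bytes (unpack_large is just an illustrative helper name):

import gc
import msgpack

def unpack_large(payload):
    # Disable the collector while the unpacker allocates many objects,
    # then re-enable it afterwards.
    gc.disable()
    try:
        return msgpack.unpackb(payload, raw=False)
    finally:
        gc.enable()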
use_list option
List is the default sequence type of Python, but a tuple is lighter than a list.
You can use use_list=False when unpacking if performance is important.
Python's dict can't use a list as a key, while MessagePack allows an array as a mapping key.
use_list=False allows unpacking such a message (see the sketch below).
Another way to unpack such an object is to use object_pairs_hook.
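A short sketch of the map-with-array-key case described above:

import msgpack

# A map whose key is an array: with use_list=True this would fail,
# because a Python list is unhashable and cannot be a dict key.
packed = msgpack.packb({(1, 2): "value"}, use_bin_type=True)

# use_list=False unpacks arrays as tuples, which are hashable.
assert msgpack.unpackb(packed, raw=False, use_list=False) == {(1, 2): "value"}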
Development
Test
MessagePack uses pytest for testing.
Run the tests with the following command:
Only in packing. Atoms are packed as binaries. Default value is pack.
Otherwise, any term including atoms throws badarg.
{known_atoms, [atom()]}
Both in packing and unpacking. In packing, if an atom is in this list,
it is encoded as a binary. In unpacking, msgpacked binaries are
decoded as atoms with erlang:binary_to_existing_atom/2 with encoding
utf8. Default value is an empty list.
Even if allow_atom is none, known atoms are packed.
{unpack_str, as_binary|as_list}
A switch to choose the decoded term style of the str type when unpacking.
Only available with the new spec. Default is as_list.
 mode        as_binary    as_list
-----------+------------+----------
 bin         binary()     binary()
 str         binary()     string()
{validate_string, boolean()}
Only in unpacking; if enabled, UTF-8 validation is performed when unpacking
from the str type. Default value is false.
{pack_str, from_binary|from_list|none}
A switch to choose how string() is packed. Only available
with the new spec. Default is from_list, for symmetry with the unpack_str
option.
 mode        from_list     from_binary     none
-----------+-------------+---------------+----------------
 binary()    bin           str*/bin        bin
 string()    str*/array    array of int    array of int
 list()      array         array           array
The default option trades performance for symmetry. If
the overhead of UTF-8 validation is unacceptable, choosing none as
the option would be the best.
* Tries to pack as str if it is a valid string().
{map_format, map|jiffy|jsx}
Both at packing and unpacking. Default value is map.
Both at packing and unpacking. The default behaviour when ext data is encountered
at decoding is to ignore it, since its length is known.
msgpack-erlang now supports the ext type, so you can serialize anything
with your own (de)serializer. This makes it possible to handle
Erlang-native types like pid() and ref() contained in a tuple(). See
test/msgpack_ext_example_tests.erl for example code.
The Float type of MessagePack represents an IEEE 754 floating point number, so it includes NaN and Infinity.
In unpacking, msgpack-erlang returns nan, positive_infinity and negative_infinity.
License
Apache License 2.0
Release Notes
0.7.0
Support nan, positive_infinity and negative_infinity
0.6.0
Support OTP 19.0
0.5.0
Renewed optional arguments to the pack/unpack interface. This is an
incompatible change from the 0.4 series.
0.4.0
Deprecate nil
Moved to rebar3
Promote default map unpacker as default format when OTP is >= 17
Added QuickCheck tests
Since this version, OTP releases older than R16B03-1 are no longer supported
0.3.5 / 0.3.4
The 0.3 series will be the last versions that support R16B or older
versions of OTP.
OTP 18.0 support
Promote default map unpacker as default format when OTP is >= 18
0.3.3
Add OTP 17 series to Travis-CI tests
Fix wrong numbering for ext types
Allow packing maps even when {format,map} is not set
Fix Dialyzer invalid contract warning
Proper use of null for jiffy-style encoding/decoding
0.3.2
set back default style as jiffy
fix bugs around nil/null handling
0.3.0
supports map new in 17.0
jiffy-style maps will be deprecated in near future
set default style as map
0.2.8
The 0.2 series works with OTP 17.0, R16, R15, and with MessagePack's new
and old formats, but it does not support the map type introduced in
OTP 17.0.
Here is a list of sbt commands for daily development:
> ~compile # Compile source codes
> ~test:compile # Compile both source and test codes
> ~test # Run tests upon source code change
> ~test-only *MessagePackTest # Run tests in the specified class
> ~test-only *MessagePackTest -- -n prim # Run the test tagged as "prim"
> project msgpack-scala # Focus on a specific project
> package # Create a jar file in the target folder of each project
> scalafmt # Reformat source codes
> ; coverage; test; coverageReport; coverageAggregate; # Code coverage
Publishing
> publishLocal # Install to local .ivy2 repository
> publish # Publishing a snapshot version to the Sonatype repository
> release # Run the release procedure (set a new version, run tests, upload artifacts, then deploy to Sonatype)
For publishing to Maven central, msgpack-scala uses sbt-sonatype plugin. Set Sonatype account information (user name and password) in the global sbt settings. To protect your password, never include this file in your project.
This is MessagePack serialization/deserialization for CLI (Common Language Infrastructure) implementations such as .NET Framework, Silverlight, Mono (including Moonlight.)
This library can be used from ALL CLS compliant languages such as C#, F#, Visual Basic, Iron Python, Iron Ruby, PowerShell, C++/CLI or so.
Usage
You can serialize/deserialize objects as follows:
Create a serializer via the MessagePackSerializer.Get generic method. This method creates serializers for dependent types as well.
Invoke the serializer as follows:
Pack method with the destination Stream and the target object for serialization.
Unpack method with the source Stream.
// Creates serializer.
var serializer = MessagePackSerializer.Get<T>();
// Pack obj to stream.
serializer.Pack(stream, obj);
// Unpack from stream.
var unpackedObject = serializer.Unpack(stream);

' Creates serializer.
Dim serializer = MessagePackSerializer.Get(Of T)()
' Pack obj to stream.
serializer.Pack(stream, obj)
' Unpack from stream.
Dim unpackedObject = serializer.Unpack(stream)
For production environments, you should instantiate your own SerializationContext and manage its lifetime. It is a good idea to treat it as a singleton, because SerializationContext is thread-safe.
Features
Fast and interoperable binary format serialization with a simple API.
Generation of pre-compiled assemblies for rapid start-up.
A flexible MessagePackObject that represents the MessagePack type system naturally.
Note: AOT support is still limited. Use serializer pre-generation with the mpu -s utility or the API.
If you do not pre-generate serializers, MsgPack for CLI falls back to reflection in AOT environments, which is slower and sometimes causes AOT-related errors (ExecutionEngineException for runtime JIT compilation).
For Mono, you can use the net461 or net35 drops, depending on the runtime you run with.
For Unity, the unity3d drop is suitable.
How to build
For .NET Framework
Install Visual Studio 2017 (Community edition is OK) and 2015 (for MsgPack.Windows.sln).
You must install .NET Framework 3.5, 4.x, .NET Core, and the Xamarin dev tools to complete all builds successfully.
If you do not want to install optional components, edit the <TargetFrameworks> element in the *.csproj files to exclude the platforms you want to skip.
Or open one of the above solution files in your IDE and run its build command.
For Mono
Install latest Mono and .NET Core SDK.
Now you can build MsgPack.sln and MsgPack.Xamarin.sln with the above instructions and msbuild in the latest Mono. Note that xbuild does not work, because it does not support the latest csproj format.
Own Unity 3D Build
First of all, there are binary drops on the GitHub releases page; you should use them to save your time.
Because we do not guarantee source code organization compatibility, we might add or remove non-public types or members, which would break source-code builds.
If you want to import sources, you must include only the files listed in MsgPack.Unity3D.csproj.
If you want to use the ".NET 2.0 Subset" settings, you must use only the files listed in MsgPack.Unity3D.CorLibOnly.csproj, and define the CORLIB_ONLY compiler constant.
Xamarin Android testing
If you run on Windows, it is recommended to use HAXM instead of the Hyper-V based emulator.
You can disable Hyper-V from a privileged (administrator) PowerShell as follows:
An error occurred while running the unit test project.
Rebuild the project and rerun it. Or, log in to your Mac again and retry.
It is hard to read English.
You can read localized Xamarin docs by putting {region}-{lang} as the first component of the URL path, such as https://developer.xamarin.com/ja-jp/guides/....
MessagePack is an efficient binary serialization
format, which lets you exchange data among multiple languages like JSON,
except that it's faster and smaller. Small integers are encoded into a
single byte and short strings require only one extra byte in
addition to the strings themselves.
Example
In C:
#include <msgpack.h>
#include <stdio.h>

int main(void)
{
    /* msgpack::sbuffer is a simple buffer implementation. */
    msgpack_sbuffer sbuf;
    msgpack_sbuffer_init(&sbuf);

    /* serialize values into the buffer using msgpack_sbuffer_write callback function. */
    msgpack_packer pk;
    msgpack_packer_init(&pk, &sbuf, msgpack_sbuffer_write);

    msgpack_pack_array(&pk, 3);
    msgpack_pack_int(&pk, 1);
    msgpack_pack_true(&pk);
    msgpack_pack_str(&pk, 7);
    msgpack_pack_str_body(&pk, "example", 7);

    /* deserialize the buffer into msgpack_object instance. */
    /* deserialized object is valid during the msgpack_zone instance alive. */
    msgpack_zone mempool;
    msgpack_zone_init(&mempool, 2048);

    msgpack_object deserialized;
    msgpack_unpack(sbuf.data, sbuf.size, NULL, &mempool, &deserialized);

    /* print the deserialized object. */
    msgpack_object_print(stdout, deserialized);
    puts("");

    msgpack_zone_destroy(&mempool);
    msgpack_sbuffer_destroy(&sbuf);

    return 0;
}
#include <msgpack.hpp>
#include <string>
#include <iostream>
#include <sstream>

int main(void)
{
    msgpack::type::tuple<int, bool, std::string> src(1, true, "example");

    // serialize the object into the buffer.
    // any classes that implements write(const char*,size_t) can be a buffer.
    std::stringstream buffer;
    msgpack::pack(buffer, src);

    // send the buffer ...
    buffer.seekg(0);

    // deserialize the buffer into msgpack::object instance.
    std::string str(buffer.str());

    msgpack::object_handle oh =
        msgpack::unpack(str.data(), str.size());

    // deserialized object is valid during the msgpack::object_handle instance is alive.
    msgpack::object deserialized = oh.get();

    // msgpack::object supports ostream.
    std::cout << deserialized << std::endl;

    // convert msgpack::object instance into the original type.
    // if the type is mismatched, it throws msgpack::type_error exception.
    msgpack::type::tuple<int, bool, std::string> dst;
    deserialized.convert(dst);

    // or create the new instance
    msgpack::type::tuple<int, bool, std::string> dst2 =
        deserialized.as<msgpack::type::tuple<int, bool, std::string> >();

    return 0;
}
When you use msgpack on C++, you can just add
msgpack-c/include to your include path:
g++ -I msgpack-c/include your_source_file.cpp
If you want to use C version of msgpack, you need to build it. You can
also install the C and C++ versions of msgpack.
Building and Installing
Install from git repository
Using the Terminal (CLI)
You will need:
gcc >= 4.1.0
cmake >= 2.8.0
C and C++03:
$ git clone https://github.com/msgpack/msgpack-c.git
$ cd msgpack-c
$ cmake .
$ make
$ sudo make install
If you want to setup C++11 or C++17 version of msgpack instead,
execute the following commands:
$ git clone https://github.com/msgpack/msgpack-c.git
$ cd msgpack-c
$ cmake -DMSGPACK_CXX[11|17]=ON .
$ sudo make install
The MSGPACK_CXX[11|17] flags do not affect which files are installed; they only switch the test cases. All files are installed in every configuration.
When you use the C part of msgpack-c, you need to build and link the library. By default, both static and shared libraries are built. If you want to build only the static library, set BUILD_SHARED_LIBS=OFF for cmake. If you want to build only the shared library, set BUILD_SHARED_LIBS=ON.
Installer squeaksource
    project: 'MessagePack';
    install: 'ConfigurationOfMessagePack'.
(Smalltalk at: #ConfigurationOfMessagePack) project development load
Pharo
Gofer it
    smalltalkhubUser: 'MasashiUmezawa' project: 'MessagePack';
    configuration;
    load.
(Smalltalk at: #ConfigurationOfMessagePack) project development load
You might need MpTypeMapper initializeAll on new encoder/decoder-related updates.
MessagePack for Actionscript3 (Flash, Flex and AIR).
as3-msgpack was designed to work with the interfaces IDataInput and IDataOutput, thus the API might be easily connected with the native classes that handle binary data (such as ByteArray, Socket, FileStream and URLStream).
Moreover, as3-msgpack is capable of decoding data from binary streams.
Get started: http://loteixeira.github.io/lib/2013/08/19/as3-msgpack/
Basic usage (encoding/decoding):
// create messagepack object
var msgpack:MsgPack = new MsgPack();

// encode an array
var bytes:ByteArray = msgpack.write([1, 2, 3, 4, 5]);

// rewind the buffer
bytes.position = 0;

// print the decoded object
trace(msgpack.read(bytes));
This extension provides an API for MessagePack serialization.
MessagePack is a binary-based efficient object serialization library.
It enables you to exchange structured objects among many languages, like JSON,
but unlike JSON it is very fast and small.
Requirement
PHP 5.0 +
Install
Install from PECL
Msgpack is a PECL extension, so you can simply install it with:
pecl install msgpack
Compile Msgpack from source
$ /path/to/phpize
$ ./configure
$ make && make install
To enable your own data structures to be automatically serialized from and to
msgpack, derive from Encodable and Decodable as shown
in the following example:
This is an implementation of MessagePack for
R6RS Scheme.
API references
Function (pack! bv message) Function (pack! bv message offset)
Pack message into a MessagePack format bytevector and put it into
bv destructively. The given bv must be long enough to hold the message.
Optional argument offset indicates where to start; default is 0.
Function (pack message)
The same as pack! but this one creates a new bytevector.
Function (pack-size message)
Calculate the converted message size.
Function (unpack bv) Function (unpack bv offset)
Unpack the given message format bytevector to Scheme object.
Optional argument offset indicates where to start with, default is 0.
Function (get-unpack in)
Unpack the given binary input port to Scheme object.
Conversion rules
As you already know, Scheme doesn't have static types so the conversion of
Scheme objects to message pack data might cause unexpected results. To avoid
it, I will describe how conversion works.
Scheme to message pack
Integer conversion
The library automatically decides the proper size. More specifically, if the number
can fit in message pack's fixnum then the library uses it, and likewise for uint8-64.
If the number is too big, then an error is raised. Users should know that it tries
to use uint as much as possible. If the given number is negative then
sint will be used.
Floating point conversion
Unfortunately R6RS doesn't distinguish between float and double. So
when a flonum is given it is always converted to a double number.
Collection conversion
Message pack has two collections: map and array. These are associated
with alist (association list) and vector respectively. When you want to convert
an alist to message pack data, you need to make sure the cdr part is
the data; if you put (("key" "value")) then it will be converted to a nested
map.
The collection size calculation is done automatically. It tries to use the
smallest size.
Message pack to Scheme
The other way around is easy; it simply restores the byte data to a Scheme
object. The following describes the conversion rules:
u-msgpack-python is a lightweight MessagePack serializer and deserializer module written in pure Python, compatible with both Python 2 and 3, as well as the CPython and PyPy implementations of Python. u-msgpack-python is fully compliant with the latest MessagePack specification.
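A minimal sketch of u-msgpack-python's documented packb/unpackb interface:

import umsgpack

# Round-trip a small map through u-msgpack-python.
packed = umsgpack.packb({u"compact": True, u"schema": 0})
assert umsgpack.unpackb(packed) == {u"compact": True, u"schema": 0}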
NOTE: The standard method for encoding integers in msgpack is to use the most compact representation possible, and to encode negative integers as signed ints and non-negative numbers as unsigned ints.
For compatibility with other implementations, I'm following this convention. On the unpacking side, every integer type becomes an Int64 in Julia, unless it doesn't fit (ie. values greater than 2^63 are unpacked as Uint64).
I might change this at some point, and/or provide a way to control the unpacked types.
The Extension Type
The MsgPack spec defines the extension type to be a tuple of (typecode, bytearray) where typecode is an application-specific identifier for the data in bytearray. MsgPack.jl provides support for the extension type through the Ext immutable.
It is defined like so
struct Ext
    typecode::Int8
    data::Vector{Uint8}
end
and used like this
julia> a = [0x34, 0xff, 0x76, 0x22, 0xd3, 0xab]
6-element Array{UInt8,1}: 0x34 0xff 0x76 0x22 0xd3 0xab

julia> b = Ext(22, a)
MsgPack.Ext(22,UInt8[0x34,0xff,0x76,0x22,0xd3,0xab])

julia> p = pack(b)
9-element Array{UInt8,1}: 0xc7 0x06 0x16 0x34 0xff 0x76 0x22 0xd3 0xab

julia> c = unpack(p)
MsgPack.Ext(22,UInt8[0x34,0xff,0x76,0x22,0xd3,0xab])

julia> c == b
true
MsgPack reserves typecodes in the range [-128, -1] for future types specified by the MsgPack spec. MsgPack.jl enforces this when creating an Ext but if you are packing an implementation defined extension type (currently there are none) you can pass impltype=true.
julia> Ext(-43, Uint8[1, 5, 3, 9])
ERROR: MsgPack Ext typecode -128 through -1 reserved by implementation
 in call at /Users/sean/.julia/v0.4/MsgPack/src/MsgPack.jl:48

julia> Ext(-43, Uint8[1, 5, 3, 9], impltype=true)
MsgPack.Ext(-43,UInt8[0x01,0x05,0x03,0x09])
Serialization
MsgPack.jl also defines the extserialize and extdeserialize convenience functions. These functions can turn an arbitrary object into an Ext and vice-versa.
julia> mutable struct Point{T}
           x::T
           y::T
       end

julia> r = Point(2.5, 7.8)
Point{Float64}(2.5,7.8)
julia> e = MsgPack.extserialize(123, r)
MsgPack.Ext(123,UInt8[0x11,0x01,0x02,0x05,0x50,0x6f,0x69,0x6e,0x74,0x23 … 0x40,0x0e,0x33,0x33,0x33,0x33,0x33,0x33,0x1f,0x40])
julia> s = MsgPack.extdeserialize(e)
(123,Point{Float64}(2.5,7.8))
julia> s[2]
Point{Float64}(2.5,7.8)
julia> r
Point{Float64}(2.5,7.8)
Since these functions use serialize under the hood they are subject to the following caveat.
In general, this process will not work if the reading and writing are done by
different versions of Julia, or an instance of Julia with a different system
image.
clojure-msgpack is a lightweight and simple library for converting
between native Clojure data structures and MessagePack byte formats.
clojure-msgpack only depends on Clojure itself; it has no third-party
dependencies.
Installation
Usage
Basic
pack: Serialize object as a sequence of java.lang.Bytes.
clojure-msgpack provides a streaming API for situations where it is more
convenient or efficient to work with byte streams instead of fixed byte arrays
(e.g. size of object is not known ahead of time).
The streaming counterpart to msgpack.core/pack is msgpack.core/pack-stream
which returns nil and accepts either
java.io.OutputStream
or
java.io.DataOutput
as an additional argument.
Serializing a value of unrecognized type will fail with IllegalArgumentException. See Application types if you want to register your own types.
Clojure types
Some native Clojure types don't have an obvious MessagePack counterpart. We can
serialize them as Extended types. To enable automatic conversion of these
types, load the clojure-extensions library.
(msg/pack :hello)
; => IllegalArgumentException No implementation of method: :pack-stream of
;    protocol: #'msgpack.core/Packable found for class: clojure.lang.Keyword
;    clojure.core/-cache-protocol-fn (core_deftype.clj:544)
Note: No error is thrown if an unpacked value is reserved under the old spec
but defined under the new spec. We always deserialize something if we can
regardless of compatibility-mode.
Portable: Depends only on the required components of the SML Basis Library specification.
Composable: Composable combinators for encoding and decoding.
Usage
MLton and MLKit
Include mlmsgpack.mlb in your MLB file.
Poly/ML
From the interactive shell, use .sml files in the following order.
mlmsgpack-aux.sml
realprinter-default.sml
mlmsgpack.sml
SML/NJ
Use mlmsgpack.cm.
Moscow ML
From the interactive shell, use .sml files in the following order.
large.sml
mlmsgpack-aux.sml
realprinter-fail.sml
mlmsgpack.sml
Makefile.mosml is also provided.
HaMLet
From the interactive shell, use .sml files in the following order.
mlmsgpack-aux.sml
realprinter-fail.sml
mlmsgpack.sml
Alice ML
Makefile.alice is provided.
make -f Makefile.alice
alicerun mlmsgpack-test
SML#
For separate compilation, .smi files are provided. Require mlmsgpack.smi from your .smi file.
From the interactive shell, use .sml files in the following order.
mlmsgpack-aux.sml
realprinter-default.sml
mlmsgpack.sml
Tutorial
See TUTORIAL.md.
Known Problems
Our recommendation is MLton, MLKit, Poly/ML and SML#(>=2.0.0) as all tests passed on them.
SML/NJ and Moscow ML are fine if you don't use real values.
SML/NJ
Packing real values fails or produces imprecise results in some cases.
Moscow ML
Packing real values is not supported, since some components of the SML Basis Library are not provided.
HaMLet
Packing real values is not supported, since some components of the SML Basis Library are not provided.
Some functions are very slow, although they work properly. (We tested HaMLet compiled with MLton.)
Alice ML
Packing real values is not supported, since some components of the SML Basis Library are not provided.
Also, some unit tests fail.
SML#
Most functions do not work properly because of bugs of SML# prior to version 2.0.0.
See Also
There already exists another MessagePack implementation for SML,
called MsgPack-SML, which is targeted at MLton.
CMP is a C implementation of the MessagePack serialization format. It
currently implements version 5 of the MessagePack
Spec.
CMP's goal is to be lightweight and straightforward, forcing nothing on the
programmer.
License
While I'm a big believer in the GPL, I license CMP under the MIT license.
Example Usage
The following examples use a file as the backend, and are modeled after the
examples included with the msgpack-c project.
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#include "cmp.h"

static bool read_bytes(void *data, size_t sz, FILE *fh) {
    return fread(data, sizeof(uint8_t), sz, fh) == (sz * sizeof(uint8_t));
}

static bool file_reader(cmp_ctx_t *ctx, void *data, size_t limit) {
    return read_bytes(data, limit, (FILE *)ctx->buf);
}

static bool file_skipper(cmp_ctx_t *ctx, size_t count) {
    return fseek((FILE *)ctx->buf, count, SEEK_CUR);
}

static size_t file_writer(cmp_ctx_t *ctx, const void *data, size_t count) {
    return fwrite(data, sizeof(uint8_t), count, (FILE *)ctx->buf);
}

void error_and_exit(const char *msg) {
    fprintf(stderr, "%s\n\n", msg);
    exit(EXIT_FAILURE);
}

int main(void) {
    FILE *fh = NULL;
    cmp_ctx_t cmp;
    uint32_t array_size = 0;
    uint32_t str_size = 0;
    char hello[6] = {0, 0, 0, 0, 0, 0};
    char message_pack[12] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};

    fh = fopen("cmp_data.dat", "w+b");

    if (fh == NULL)
        error_and_exit("Error opening data.dat");

    cmp_init(&cmp, fh, file_reader, file_skipper, file_writer);

    if (!cmp_write_array(&cmp, 2))
        error_and_exit(cmp_strerror(&cmp));

    if (!cmp_write_str(&cmp, "Hello", 5))
        error_and_exit(cmp_strerror(&cmp));

    if (!cmp_write_str(&cmp, "MessagePack", 11))
        error_and_exit(cmp_strerror(&cmp));

    rewind(fh);

    if (!cmp_read_array(&cmp, &array_size))
        error_and_exit(cmp_strerror(&cmp));

    /* You can read the str byte size and then read str bytes... */
    if (!cmp_read_str_size(&cmp, &str_size))
        error_and_exit(cmp_strerror(&cmp));

    if (str_size > (sizeof(hello) - 1))
        error_and_exit("Packed 'hello' length too long\n");

    if (!read_bytes(hello, str_size, fh))
        error_and_exit(cmp_strerror(&cmp));

    /*
     * ...or you can set the maximum number of bytes to read and do it all in
     * one call
     */
    str_size = sizeof(message_pack);
    if (!cmp_read_str(&cmp, message_pack, &str_size))
        error_and_exit(cmp_strerror(&cmp));

    printf("Array Length: %u.\n", array_size);
    printf("[\"%s\", \"%s\"]\n", hello, message_pack);

    fclose(fh);

    return EXIT_SUCCESS;
}
Advanced Usage
See the examples folder.
Fast, Lightweight, Flexible, and Robust
CMP uses no internal buffers; conversions, encoding and decoding are done on
the fly.
CMP's source and header file together are ~4k LOC.
CMP makes no heap allocations.
CMP uses standardized types rather than declaring its own, and it depends only
on stdbool.h, stdint.h and string.h.
CMP is written using C89 (ANSI C), aside, of course, from its use of
fixed-width integer types and bool.
On the other hand, CMP's test suite requires C99.
CMP only requires the programmer supply a read function, a write function, and
an optional skip function. In this way, the programmer can use CMP on memory,
files, sockets, etc.
CMP is portable. It uses fixed-width integer types, and checks the endianness
of the machine at runtime before swapping bytes (MessagePack is big-endian).
CMP provides a fairly comprehensive error reporting mechanism modeled after
errno and strerror.
CMP is thread aware; while contexts cannot be shared between threads, each
thread may use its own context freely.
CMP is tested using the MessagePack test suite as well as a large set of custom
test cases. Its small test program is compiled with clang using -Wall -Werror -Wextra ... along with several other flags, and generates no compilation
errors in either clang or GCC.
CMP's source is written as readably as possible, using explicit, descriptive
variable names and a consistent, clear style.
CMP's source is written to be as secure as possible. Its testing suite checks
for invalid values, and data is always treated as suspect before it passes
validation.
CMP's API is designed to be clear, convenient and unsurprising. Strings are
null-terminated, binary data is not, error codes are clear, and so on.
CMP provides optional backwards compatibility for use with other MessagePack
implementations that only implement version 4 of the spec.
Building
There is no build system for CMP. The programmer can drop cmp.c and cmp.h
in their source tree and modify as necessary. No special compiler settings are
required to build it, and it generates no compilation errors in either clang or
gcc.
Versioning
CMP's versions are single integers. I don't use semantic versioning because
I don't guarantee that any version is completely compatible with any other. In
general, semantic versioning provides a false sense of security. You should be
evaluating compatibility yourself, not relying on some stranger's versioning
convention.
Stability
I only guarantee stability for versions released on
the releases page. While rare, both master and develop
branches may have errors or mismatched versions.
Backwards Compatibility
Version 4 of the MessagePack spec has no BIN type, and provides no STR8
marker. In order to remain backwards compatible with version 4 of MessagePack,
do the following:
Avoid these functions:
cmp_write_bin
cmp_write_bin_marker
cmp_write_str8_marker
cmp_write_str8
cmp_write_bin8_marker
cmp_write_bin8
cmp_write_bin16_marker
cmp_write_bin16
cmp_write_bin32_marker
cmp_write_bin32
Use these functions in lieu of their v5 counterparts:
cmp_write_str_marker_v4 instead of cmp_write_str_marker
Msgpack for HHVM is a msgpack binding for HHVM.
API
msgpack_pack(mixed $input) : string;
Pack the input to msgpack. Objects and resources are not supported; arrays and other types are supported.
Returns false on failure.
msgpack_unpack(string $pac) : mixed;
Unpack a msgpack string.
Installation
$ git clone https://github.com/reeze/msgpack-hhvm --depth=1
$ cd msgpack-hhvm
$ hphpize && cmake . && make
$ cp msgpack.so /path/to/your/hhvm/ext/dir
If you don't have the hphpize program, please install the hhvm-dev package.
This Jackson extension library handles reading and writing of data encoded in MessagePack data format.
It extends standard Jackson streaming API (JsonFactory, JsonParser, JsonGenerator), and as such works seamlessly with all the higher level data abstractions (data binding, tree model, and pluggable extensions).
Maven dependency
To use this module in Maven-based projects, use the following dependency:
Decodes buf from msgpack. buf can be a Buffer or a bl instance.
In order to support a stream interface, a user must pass in a bl instance.
registerEncoder(check(obj), encode(obj))
Register a new custom object type for being automatically encoded.
The arguments are:
check, a function that will be called to check if the passed
object should be encoded with the encode function
encode, a function that will be called to encode an object in binary
form; this function must return a Buffer which includes the same type
used for registerDecoder.
registerDecoder(type, decode(buf))
Register a new custom object type for being automatically decoded.
The arguments are:
type, a positive integer identifying the type once serialized
decode, a function that will be called to decode the object from
the passed Buffer
Register a new custom object type for being automatically encoded and
decoded. The arguments are:
type, a positive integer identifying the type once serialized
constructor, the function that will be used to match the objects
with instanceof
encode, a function that will be called to encode an object in binary
form; this function must return a Buffer that can be
deserialized by the decode function
decode, a function that will be called to decode the object from
the passed Buffer
QMsgPack is a simple and powerful Delphi & C++ Builder implementation for messagepack protocol.
QMsgPack is a part of QDAC 3.0. Source code is hosted on SourceForge (http://sourceforge.net/p/qdac3).
Features
· Full type support, including the messagepack extension type
· Fully open source, free for use for ANY PURPOSE
· Quick and simple interface
· RTTI support included
Install
QMsgPack is not a design-time package, so just place the QMsgPack files into your search path and add them to your project.
// packing
MsgPackStream stream(&ba, QIODevice::WriteOnly);
stream << 1 << 2.3 << "some string";
// unpacking
MsgPackStream stream(ba);
int a;
double b;
QString s;
stream >> a >> b >> s;
Qt types and User types
There are packers and unpackers for QColor, QTime, QDate, QDateTime, QPoint, QSize, QRect. You can also create your own packer/unpacker methods for Qt or your own types. See docs for details.
Field names can be set in much the same way as the encoding/json package. For example:
type Person struct {
    Name       string `msg:"name"`
    Address    string `msg:"address"`
    Age        int    `msg:"age"`
    Hidden     string `msg:"-"` // this field is ignored
    unexported bool             // this field is also ignored
}
By default, the code generator will satisfy msgp.Sizer, msgp.Encodable, msgp.Decodable,
msgp.Marshaler, and msgp.Unmarshaler. Carefully-designed applications can use these methods to do
marshalling/unmarshalling with zero heap allocations.
While msgp.Marshaler and msgp.Unmarshaler are quite similar to the standard library's
json.Marshaler and json.Unmarshaler, msgp.Encodable and msgp.Decodable are useful for
stream serialization. (*msgp.Writer and *msgp.Reader are essentially protocol-aware versions
of *bufio.Writer and *bufio.Reader, respectively.)
Features
Extremely fast generated code
Test and benchmark generation
JSON interoperability (see msgp.CopyToJSON() and msgp.UnmarshalAsJSON())
Support for complex type declarations
Native support for Go's time.Time, complex64, and complex128 types
Generation of both []byte-oriented and io.Reader/io.Writer-oriented methods
As long as the declarations of MyInt and Data are in the same file as Struct, the parser will determine that the type information for MyInt and Data can be passed into the definition of Struct before its methods are generated.
Extensions
MessagePack supports defining your own types through "extensions," which are just a tuple of
the data "type" (int8) and the raw binary. You can see a worked example in the wiki.
Status
Mostly stable, in that no breaking changes have been made to the /msgp library in more than a year. Newer versions
of the code may generate different code than older versions for performance reasons. I (@philhofer) am aware of a
number of stability-critical commercial applications that use this code with good results. But, caveat emptor.
You can read more about how msgp maps MessagePack types onto Go types in the wiki.
Here are some of the known limitations/restrictions:
Identifiers from outside the processed source file are assumed (optimistically) to satisfy the generator's interfaces. If this isn't the case, your code will fail to compile.
Like most serializers, chan and func fields are ignored, as well as non-exported fields.
Encoding of interface{} is limited to built-ins or types that have explicit encoding methods.
Maps must have string keys. This is intentional (as it preserves JSON interop.) Although non-string map keys are not forbidden by the MessagePack standard, many serializers impose this restriction. (It also means any well-formed struct can be de-serialized into a map[string]interface{}.) The only exception to this rule is that the deserializers will allow you to read map keys encoded as bin types, due to the fact that some legacy encodings permitted this. (However, those values will still be cast to Go strings, and they will be converted to str types when re-encoded. It is the responsibility of the user to ensure that map keys are UTF-8 safe in this case.) The same rules hold true for JSON translation.
If the output compiles, then there's a pretty good chance things are fine. (Plus, we generate tests for you.) Please, please, please file an issue if you think the generator is writing broken code.
As one might expect, the generated methods that deal with []byte are faster for small objects, but the io.Reader/Writer methods are generally more memory-efficient (and, at some point, faster) for large (> 2KB) objects.
msgpack-cli is a command line tool that converts data from JSON to Msgpack and vice versa. It also allows calling RPC methods via msgpack-rpc.
Installation
% go get github.com/jakm/msgpack-cli
Debian packages and Windows binaries are available on the project's
Releases page.
Usage
msgpack-cli
Usage:
msgpack-cli encode <input-file> [--out=<output-file>] [--disable-int64-conv]
msgpack-cli decode <input-file> [--out=<output-file>] [--pp]
msgpack-cli rpc <host> <port> <method> [<params>|--file=<input-file>] [--pp]
[--timeout=<timeout>][--disable-int64-conv]
msgpack-cli -h | --help
msgpack-cli --version
Commands:
encode Encode data from input file to STDOUT
decode Decode data from input file to STDOUT
rpc Call RPC method and write result to STDOUT
Options:
-h --help Show this help message and exit
--version Show version
--out=<output-file> Write output data to file instead of STDOUT
--file=<input-file> File where parameters or RPC method are read from
--pp Pretty-print - indent output JSON data
--timeout=<timeout> Timeout of RPC call [default: 30]
--disable-int64-conv Disable the default behaviour such that JSON numbers
are converted to float64 or int64 numbers by their
meaning, all result numbers will have float64 type
Arguments:
<input-file> File where data are read from
<host> Server hostname
<port> Server port
<method> Name of RPC method
<params> Parameters of RPC method in JSON format
txmsgpackrpc is a library for writing asynchronous
msgpack-rpc
servers and clients in Python, using Twisted
framework. The library is based on
txMsgpack, but some
improvements and fixes were made.
Features
user friendly API
modular object model
working timeouts and reconnecting
connection pool support
TCP, SSL, UDP and UNIX sockets
Python 3 note
To use UNIX sockets with Python 3 please use Twisted framework 15.3.0 and above.
Computation of PI with 5 places finished in 0.022390 seconds
Computation of PI with 100 places finished in 0.037856 seconds
Computation of PI with 1000 places finished in 0.038070 seconds
Computation of PI with 10000 places finished in 0.073907 seconds
Computation of PI with 100000 places finished in 6.741683 seconds
Computation of PI with 5 places finished in 0.001142 seconds
Computation of PI with 100 places finished in 0.001182 seconds
Computation of PI with 1000 places finished in 0.001206 seconds
Computation of PI with 10000 places finished in 0.001230 seconds
Computation of PI with 100000 places finished in 0.001255 seconds
Computation of PI with 1000000 places finished in 432.574457 seconds
Computation of PI with 1000000 places finished in 402.551226 seconds
DONE
Server
from __future__ import print_function
from collections import defaultdict
from twisted.internet import defer, reactor, utils
from twisted.python import failure
from txmsgpackrpc.server import MsgpackRPCServer
pi_chudovsky_bs ='''"""Python3 program to calculate Pi using python long integers, binarysplitting and the Chudnovsky algorithmSee: http://www.craig-wood.com/nick/articles/pi-chudnovsky/ for moreinfoNick Craig-Wood <[email protected]>"""import mathfrom time import timedef sqrt(n, one): """ Return the square root of n as a fixed point number with the one passed in. It uses a second order Newton-Raphson convgence. This doubles the number of significant figures on each iteration. """ # Use floating point arithmetic to make an initial guess floating_point_precision = 10**16 n_float = float((n * floating_point_precision) // one) / floating_point_precision x = (int(floating_point_precision * math.sqrt(n_float)) * one) // floating_point_precision n_one = n * one while 1: x_old = x x = (x + n_one // x) // 2 if x == x_old: break return xdef pi_chudnovsky_bs(digits): """ Compute int(pi * 10**digits) This is done using Chudnovsky's series with binary splitting """ C = 640320 C3_OVER_24 = C**3 // 24 def bs(a, b): """ Computes the terms for binary splitting the Chudnovsky infinite series a(a) = +/- (13591409 + 545140134*a) p(a) = (6*a-5)*(2*a-1)*(6*a-1) b(a) = 1 q(a) = a*a*a*C3_OVER_24 returns P(a,b), Q(a,b) and T(a,b) """ if b - a == 1: # Directly compute P(a,a+1), Q(a,a+1) and T(a,a+1) if a == 0: Pab = Qab = 1 else: Pab = (6*a-5)*(2*a-1)*(6*a-1) Qab = a*a*a*C3_OVER_24 Tab = Pab * (13591409 + 545140134*a) # a(a) * p(a) if a & 1: Tab = -Tab else: # Recursively compute P(a,b), Q(a,b) and T(a,b) # m is the midpoint of a and b m = (a + b) // 2 # Recursively calculate P(a,m), Q(a,m) and T(a,m) Pam, Qam, Tam = bs(a, m) # Recursively calculate P(m,b), Q(m,b) and T(m,b) Pmb, Qmb, Tmb = bs(m, b) # Now combine Pab = Pam * Pmb Qab = Qam * Qmb Tab = Qmb * Tam + Pam * Tmb return Pab, Qab, Tab # how many terms to compute DIGITS_PER_TERM = math.log10(C3_OVER_24/6/2/6) N = int(digits/DIGITS_PER_TERM + 1) # Calclate P(0,N) and Q(0,N) P, Q, T = bs(0, N) one = 10**digits sqrtC = sqrt(10005*one, one) return (Q*426880*sqrtC) // Tif __name__ == "__main__": import sys digits = int(sys.argv[1]) pi = pi_chudnovsky_bs(digits) print(pi)'''defset_timeout(deferred, timeout=30):
    def callback(value):
        if not watchdog.called:
            watchdog.cancel()
        return value
    deferred.addBoth(callback)
    watchdog = reactor.callLater(timeout, defer.timeout, deferred)


class ComputePI(MsgpackRPCServer):

    def __init__(self):
        self.waiting = defaultdict(list)
        self.results = {}

    def remote_PI(self, digits, timeout=None):
        if digits in self.results:
            return defer.succeed(self.results[digits])

        d = defer.Deferred()

        if digits not in self.waiting:
            subprocessDeferred = self.computePI(digits, timeout)

            def callWaiting(res):
                waiting = self.waiting[digits]
                del self.waiting[digits]

                if isinstance(res, failure.Failure):
                    func = lambda d: d.errback(res)
                else:
                    func = lambda d: d.callback(res)

                for d in waiting:
                    func(d)

            subprocessDeferred.addBoth(callWaiting)

        self.waiting[digits].append(d)

        return d

    def computePI(self, digits, timeout):
        d = utils.getProcessOutputAndValue('/usr/bin/python', args=('-c', pi_chudovsky_bs, str(digits)))

        def callback((out, err, code)):
            if code == 0:
                pi = int(out)
                self.results[digits] = pi
                return pi
            else:
                return failure.Failure(RuntimeError('Computation failed: ' + err))

        if timeout is not None:
            set_timeout(d, timeout)

        d.addCallback(callback)

        return d


def main():
    server = ComputePI()
    reactor.listenTCP(8000, server.getStreamFactory())


if __name__ == '__main__':
    reactor.callWhenRunning(main)
    reactor.run()
Client
from __future__ import print_function

import sys
import time

from twisted.internet import defer, reactor, task
from twisted.python import failure


@defer.inlineCallbacks
def main():
    try:
        from txmsgpackrpc.client import connect

        c = yield connect('localhost', 8000, waitTimeout=900)

        def callback(res, digits, start_time):
            if isinstance(res, failure.Failure):
                print('Computation of PI with %d places failed: %s' %
                      (digits, res.getErrorMessage()), end='\n\n')
            else:
                print('Computation of PI with %d places finished in %f seconds' %
                      (digits, time.time() - start_time), end='\n\n')
            sys.stdout.flush()

        defers = []

        for _ in range(2):
            for digits in (5, 100, 1000, 10000, 100000, 1000000):
                d = c.createRequest('PI', digits, 600)
                d.addBoth(callback, digits, time.time())
                defers.append(d)
            # wait for 30 seconds
            yield task.deferLater(reactor, 30, lambda: None)

        yield defer.DeferredList(defers)

        print('DONE')
    except Exception:
        import traceback
        traceback.print_exc()
    finally:
        reactor.stop()


if __name__ == '__main__':
    reactor.callWhenRunning(main)
    reactor.run()
Multicast UDP example
Example servers join to group 224.0.0.5 and listen on port 8000. Their only
method echo returns its parameter.
Client joins group 224.0.0.5, sends a multicast request to the group on port 8000
and waits for 5 seconds for responses. If some responses are received, the
protocol calls back with a tuple of the results, and the individual parts are checked for
errors. If no responses are received, the protocol errbacks with TimeoutError.
Because there is no general way to determine the number of peers in a group,
MsgpackMulticastDatagramProtocol always waits for responses until waitTimeout
expires.
Since J has no native Dictionary / Hashmap type, one has been implemented for the purposes of MsgPack serialization.
Construction:
`HM =: '' conew 'HashMap'`
This will instantiate a new HashMap object.
`set__HM 'key';'value'`
This will add a key value pair to the dictionary. Note the length of the boxed array argument must be two, i.e. if the value is an array itself, then it must be boxed together before appending to the key value.
`get__HM 'key'`
This will return the value for the given key, if one exists.
To pack a HashMap:
`packObj s: HM`
Here HM is the HashMap reference name. It must be symbolized first, before packing. Furthermore, to add a HashMap as a value of another HashMap:
`set__HM 'hashmapkey';s:HM2`
The inner HashMap reference (HM2) must be symbolized before adding to the dictionary. If you are adding a list of HashMaps to the parent HashMap:
`set__HM 'key'; <(s:HM2;s:HM3;s:HM4)`
Note the HashMap array is boxed so that the argument for set is of length two. Since the HashMap HM stores the reference to the child HashMaps as symbols, they must be desymbolized if retrieved. e.g.
msgpack-nim currently provides only the basic functionality.
Please see what's listed in the Todo section. Compared to other language bindings, it's well-tested, with
1000 auto-generated test cases from Haskell QuickCheck that always run
on every commit to the GitHub repository. Please try make quickcheck on your local machine
to see what happens (it will take a little while; be patient). Have a nice packing!
Install
$ nimble update
$ nimble install msgpack
Example
import msgpack
import streams

# You can use any stream subclasses to serialize/deserialize
# messages. e.g. FileStream
let st: Stream = newStringStream()
assert(st.getPosition == 0)

# Type checking protects you from making trivial mistakes.
# Now we pack {"a":[5,-3], "b":[1,2,3]} but more complex
# combination of any Msg types is allowed.
#
# In xs we can mix specific conversion (PFixNum) and generic
# conversion (unwrap).
let xs: Msg = wrap(@[PFixNum(5), (-3).wrap])
let ys: Msg = wrap(@[("a".wrap, xs.wrap), ("b".wrap, @[1, 2, 3].wrap)])

st.pack(ys.wrap)  # Serialize!

# We need to reset the cursor to the beginning of the target
# byte sequence.
st.setPosition(0)

let msg = st.unpack  # Deserialize!

# output:
# a
# 5
# -3
# b
# 1
# 2
# 3
for e in msg.unwrapMap:
  echo e.key.unwrapStr
  for e in e.val.unwrapArray:
    echo e.unwrapInt
Todo
Implement unwrapInto to convert Msg object to Nim object handily
Evaluate performance and scalability
Talk with the official Ruby implementation
Don't repeat yourself: the code now has too much duplication. Use templates?
The core of MPack contains a buffered reader and writer, and a tree-style parser that decodes into a tree of dynamically typed nodes. Helper functions can be enabled to read values of expected type, to work with files, to allocate strings automatically, to check UTF-8 encoding, and more.
The MPack code is small enough to be embedded directly into your codebase. Simply download the amalgamation package and add mpack.h and mpack.c to your project.
The MPack featureset can be customized at compile-time to set which features, components and debug checks are compiled, and what dependencies are available.
The Node API parses a chunk of MessagePack data into an immutable tree of dynamically-typed nodes. A series of helper functions can be used to extract data of specific types from each node.
// parse a file into a node tree
mpack_tree_t tree;
mpack_tree_init_filename(&tree, "homepage-example.mp", 0);
mpack_tree_parse(&tree);
mpack_node_t root = mpack_tree_root(&tree);

// extract the example data on the msgpack homepage
bool compact = mpack_node_bool(mpack_node_map_cstr(root, "compact"));
int schema = mpack_node_i32(mpack_node_map_cstr(root, "schema"));

// clean up and check for errors
if (mpack_tree_destroy(&tree) != mpack_ok) {
    fprintf(stderr, "An error occurred decoding the data!\n");
    return;
}
Note that no additional error handling is needed in the above code. If the file is missing or corrupt, if map keys are missing or if nodes are not in the expected types, special "nil" nodes and false/zero values are returned and the tree is placed in an error state. An error check is only needed before using the data.
The above example decodes into allocated pages of nodes. A fixed node pool can be provided to the parser instead in memory-constrained environments. For maximum performance and minimal memory usage, the Expect API can be used to parse data of a predefined schema.
The Write API
The Write API encodes structured data to MessagePack.
// encode to memory buffer
char* data;
size_t size;
mpack_writer_t writer;
mpack_writer_init_growable(&writer, &data, &size);

// write the example on the msgpack homepage
mpack_start_map(&writer, 2);
mpack_write_cstr(&writer, "compact");
mpack_write_bool(&writer, true);
mpack_write_cstr(&writer, "schema");
mpack_write_uint(&writer, 0);
mpack_finish_map(&writer);

// finish writing
if (mpack_writer_destroy(&writer) != mpack_ok) {
    fprintf(stderr, "An error occurred encoding the data!\n");
    return;
}

// use the data
do_something_with_data(data, size);
free(data);
In the above example, we encode to a growable memory buffer. The writer can instead write to a pre-allocated or stack-allocated buffer, avoiding the need for memory allocation. The writer can also be provided with a flush function (such as a file or socket write function) to call when the buffer is full or when writing is done.
If any error occurs, the writer is placed in an error state. The writer will flag an error if too much data is written, if the wrong number of elements are written, if the data could not be flushed, etc. No additional error handling is needed in the above code; any subsequent writes are ignored when the writer is in an error state, so you don't need to check every write for errors.
Note in particular that in debug mode, the mpack_finish_map() call above ensures that two key/value pairs were actually written as claimed, something that other MessagePack C/C++ libraries may not do.
Comparison With Other Parsers
MPack is rich in features while maintaining very high performance and a small code footprint. Here's a short feature table comparing it to other C parsers:
A larger feature comparison table is available here which includes descriptions of the various entries in the table.
This benchmarking suite compares the performance of MPack to other implementations of schemaless serialization formats. MPack outperforms all JSON and MessagePack libraries, and in some tests MPack is several times faster than RapidJSON for equivalent data.
Why Not Just Use JSON?
Conceptually, MessagePack stores data similarly to JSON: they are both composed of simple values such as numbers and strings, stored hierarchically in maps and arrays. So why not just use JSON instead? The main reason is that JSON is designed to be human-readable, so it is not as efficient as a binary serialization format:
Compound types such as strings, maps and arrays are delimited, so appropriate storage cannot be allocated upfront. The whole object must be parsed to determine its size.
Strings are not stored in their native encoding. Special characters such as quotes and backslashes must be escaped when written and converted back when read.
Numbers are particularly inefficient (especially when parsing back floats), making JSON inappropriate as a base format for structured data that contains lots of numbers.
Binary data is not supported by JSON at all. Small binary blobs such as icons and thumbnails need to be Base64 encoded or passed out-of-band.
The above issues greatly increase the complexity of the decoder. Full-featured JSON decoders are quite large, and minimal decoders tend to leave out such features as string unescaping and float parsing, instead leaving these up to the user or platform. This can lead to hard-to-find platform-specific and locale-specific bugs, as well as a greater potential for security vulnerabilites. This also significantly decreases performance, making JSON unattractive for use in applications such as mobile games.
While the space inefficiencies of JSON can be partially mitigated through minification and compression, the performance inefficiencies cannot. More importantly, if you are minifying and compressing the data, then why use a human-readable format in the first place?
Running the Unit Tests
The MPack build process does not build MPack into a library; it is used to build and run the unit tests. You do not need to build MPack or the unit testing suite to use MPack.
On Linux, the test suite uses SCons and requires Valgrind, and can be run in the repository or in the amalgamation package. Run scons to build and run the test suite in full debug configuration.
On Windows, there is a Visual Studio solution, and on OS X, there is an Xcode project for building and running the test suite.
You can also build and run the test suite in all supported configurations, which is what the continuous integration server will build and run. If you are on 64-bit, you will need support for cross-compiling to 32-bit, and running 32-bit binaries with 64-bit Valgrind. On Ubuntu, you'll need libc6-dbg:i386. On Arch you'll need gcc-multilib or lib32-clang, and valgrind-multilib. Use scons all=1 -j16 (or some appropriate thread count) to build and run all tests.
RMP is designed to be lightweight and straightforward. There is a low-level API, which gives you
full control over the encoding/decoding process and makes no heap allocations. On the other hand,
there is a high-level API, which provides a convenient interface using the Rust standard library and
compile-time reflection, allowing you to encode/decode structures using a derive attribute.
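A minimal sketch of that high-level path, assuming the rmp-serde companion crate and serde's derive feature (those crate and feature names are assumptions, not part of this README):

```rust
use serde::{Deserialize, Serialize};

#[derive(Debug, PartialEq, Serialize, Deserialize)]
struct Point {
    x: i32,
    y: i32,
}

fn main() {
    let point = Point { x: 1, y: 2 };

    // Encode the struct into a MessagePack byte vector.
    let buf = rmp_serde::to_vec(&point).expect("encoding failed");

    // Decode it back; the derive attribute generates both impls.
    let decoded: Point = rmp_serde::from_slice(&buf).expect("decoding failed");
    assert_eq!(point, decoded);
}
```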
Zero-copy value decoding
RMP allows you to decode bytes from a buffer in a zero-copy manner, easily and blazingly fast, while Rust's
static checks guarantee that the data will remain valid as long as the buffer lives.
Clear error handling
RMP's error system guarantees that you never receive an error enum with an unreachable variant.
Robust and tested
This project is developed using TDD and CI, so any found bugs will be fixed without breaking
existing functionality.
Requirements
Rust 1.16
Versioning
This project adheres to Semantic Versioning. However, until 1.0.0 is released the following rules apply:
Any API/ABI breaking changes will be noted explicitly in the changelog and result in a minor version bump.
API-extending features result in a patch version bump.
Non-breaking bug fixes and performance improvements result in a patch version bump.
I am fully aware of another msgpack implementation written in Nim, but I wanted something easier to use. Another motivation comes from the Nim language itself: the current version of the Nim compiler offers many improvements, including generics specialization. I found that the Nim compiler is smart enough to make serialization/deserialization to/from msgpack easy and convenient.
requirement: nim ver 0.11.2 or later
Example
import msgpack4nim, streams
type
  # lets try with a rather complex object
  CustomType = object
    count: int
    content: seq[int]
    name: string
    ratio: float
    attr: array[0..5, int]
    ok: bool

proc initCustomType(): CustomType =
  result.count = -1
  result.content = @[1, 2, 3]
  result.name = "custom"
  result.ratio = 1.0
  for i in 0..5: result.attr[i] = i
  result.ok = false

var x = initCustomType()
# you can use another stream compatible
# class here e.g. FileStream
var s = newStringStream()
s.pack(x) # here the magic happened
s.setPosition(0)
var xx: CustomType
s.unpack(xx) # and here too
assert xx == x
echo "OK ", xx.name
see? you only need to call 'pack' and 'unpack', and the compiler does the hard work for you. Very easy, convenient, and it works well.
if you think setting up a StringStream is too much for you, you can simply call pack(yourobject) and it will return a string containing the msgpack data.
var a = @[1,2,3,4,5,6,7,8,9,0]
var buf = pack(a)
var aa: seq[int]
unpack(buf, aa)
assert a == aa
in case the compiler cannot decide how to serialize or deserialize your very complex object, you can help it in an easy way
by defining your own pack_type/unpack_type handlers
type
  # not really complex, just for example
  mycomplexobject = object
    a: someSimpleType
    b: someSimpleType

# help the compiler to decide
proc pack_type*(s: Stream, x: mycomplexobject) =
  s.pack(x.a) # let the compiler decide
  s.pack(x.b) # let the compiler decide

# help the compiler to decide
proc unpack_type*(s: Stream, x: var mycomplexobject) =
  s.unpack(x.a)
  s.unpack(x.b)

var s = newStringStream()
var x: mycomplexobject
s.pack(x)   # pack as usual
s.setPosition(0)
s.unpack(x) # unpack as usual
objects and tuples are by default converted to msgpack arrays, however
you can tell the compiler to convert them to maps by supplying --define:msgpack_obj_to_map:
nim c --define:msgpack_obj_to_map yourfile.nim
or --define:msgpack_obj_to_stream to convert object/tuple field values into a stream of msgpack objects:
nim c --define:msgpack_obj_to_stream yourfile.nim
What does this mean? It means that by default, each object/tuple will be converted to one msgpack array that contains
only the field values, without the field names.
If you specify that the object/tuple is to be converted to a msgpack map, then each object/tuple will be
converted to one msgpack map containing key-value pairs. The key will be the field name, and the value will be the field value.
If you specify that the object/tuple is to be converted to a msgpack stream, then each object/tuple will be converted
into one msgpack value per field, and the resulting stream will be concatenated
to the msgpack stream buffer.
Which one should I use?
Usually, other msgpack libraries out there convert objects/tuples/records/structs (or whatever structured data the language supports) into
msgpack arrays, but always make sure to consult the documentation first.
If both the serializer and the deserializer agree on one convention, then usually there will be no problem.
No matter which library/language you use, you can exchange msgpack data among them.
ref-types:
ref something:
if the ref value is nil, it will be packed as msgpack nil, and when unpacked you will get nil too
if the ref value is not nil, it will be dereferenced, e.g. pack(val[]) or unpack(val[])
ref is subject to some restrictions; see Restriction below
ptr will be treated like ref during packing
unpacking a ptr will invoke alloc, so you must dealloc it
circular reference:
although detecting circular references is not too difficult (using a set of pointers), the current implementation does not provide circular reference detection. If you pack something that contains a circular reference, something bad will happen
Restriction:
For objects, their type is not serialized. This essentially means that it does not work if the object has a runtime type other than its compile-time type:
import streams, msgpack4nim
type
  TA = object of RootObj
  TB = object of TA
    f: int

var
  a: ref TA
  b: ref TB

new(b)
a = b
echo stringify(pack(a))
# produces "[ ]" or "{ }"
# not "[ 0 ]" or '{ "f" : 0 }'
limitation:
these types will be ignored:
procedural types
cstring (it is not safe to assume it is always null-terminated)
pointer
these types cannot be automatically packed/unpacked:
void (will cause a compile-time error)
however, you can provide your own handler for cstring and pointer
Gotchas:
because data conversion does not preserve the original data types, the following code is perfectly valid and will raise no exception
import msgpack4nim, streams, tables, sets, strtabs
type
  Horse = object
    legs: int
    foals: seq[string]
    attr: Table[string, string]
  Cat = object
    legs: uint8
    kittens: HashSet[string]
    traits: StringTableRef

proc initHorse(): Horse =
  result.legs = 4
  result.foals = @["jilly", "colt"]
  result.attr = initTable[string, string]()
  result.attr["color"] = "black"
  result.attr["speed"] = "120mph"

var stallion = initHorse()
var tom: Cat
var buf = pack(stallion) # pack a Horse here
unpack(buf, tom)
# abracadabra, it will unpack into a Cat
echo "legs: ", $tom.legs
echo "kittens: ", $tom.kittens
echo "traits: ", $tom.traits
another gotcha:
type
  KAB = object of RootObj
    aaa: int
    bbb: int
  KCD = object of KAB
    ccc: int
    ddd: int
  KEF = object of KCD
    eee: int
    fff: int

var kk = KEF()
echo stringify(pack(kk))
# will produce "{ "eee" : 0, "fff" : 0, "ccc" : 0, "ddd" : 0, "aaa" : 0, "bbb" : 0 }"
# not "{ "aaa" : 0, "bbb" : 0, "ccc" : 0, "ddd" : 0, "eee" : 0, "fff" : 0 }"
bin and ext format
this implementation provides functions to encode/decode the msgpack bin/ext format headers, but you must write the body yourself to the StringStream
import streams, msgpack4nim
const exttype0 = 0

var s = newStringStream()
var body = "this is the body"

s.pack_ext(body.len, exttype0)
s.write(body)

# the same goes for the bin format
s.pack_bin(body.len)
s.write(body)

s.setPosition(0)
# unpack_ext returns tuple[exttype: uint8, len: int]
let (extype, extlen) = s.unpack_ext()
var extbody = s.readStr(extlen)
assert extbody == body
let binlen = s.unpack_bin()
var binbody = s.readStr(binlen)
assert binbody == body
stringify
you can convert msgpack data to a readable string using the stringify function
type
  Horse = object
    legs: int
    speed: int
    color: string
    name: string

var cc = Horse(legs: 4, speed: 150, color: "black", name: "stallion")
var zz = pack(cc)
echo stringify(zz)
toAny takes a string of msgpack data or a stream and produces a msgAny object whose type and value you can interrogate at runtime by accessing its msgType member.
toAny recognizes all valid msgpack messages and translates them into a group of runtime types.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0.
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
mruby-simplemsgpack searches for msgpack-c on your system and links against it if found. A bundled version of msgpack-c is also included in case you don't have it installed on your system.
You need at least msgpack-c 1.
Example
Objects can be packed with Object#to_msgpack or MessagePack.pack:
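The packing snippet itself is missing here; the following sketch uses only the calls named above (Object#to_msgpack and MessagePack.pack) and defines the packed_string and packed_hash values reused by the block-unpacking example below:

```ruby
# pack a single object
packed_string = 'bye'.to_msgpack

# MessagePack.pack is equivalent
packed_hash = MessagePack.pack({ a: 'hash', with: [1, 'embedded', 'array'] })
```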
A string with multiple packed values can be unpacked by handing a block to
MessagePack.unpack:
packed = packed_string + packed_hash
unpacked = []
MessagePack.unpack(packed) do |result|
unpacked << result
end
unpacked # => ['bye', { a: 'hash', with: [1, 'embedded', 'array'] }]
When using MessagePack.unpack with a block and passing it an incomplete packed message,
it returns the number of bytes it was able to unpack; if it was able to unpack the whole message, it returns self.
This is helpful if the given data contains an incomplete
last object and we want to continue unpacking after we have more data.
packed = packed_string + packed_hash.slice(0, packed_hash.length/2)
unpacked = []
unpacked_length =MessagePack.unpack(packed) do |result|
unpacked << result
end
unpacked_length # => 4 (length of packed_string)
unpacked # => ['bye']
Extension Types
To customize how objects are packed, define an extension type.
By default, MessagePack packs symbols as strings and does not convert them
back when unpacking them. Symbols can be preserved by registering an extension
type for them:
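A sketch of such a registration (the register_pack_type/register_unpack_type method names and the type number are assumptions; check the gem's README for the exact API):

```ruby
# ext type 0x01: pack a Symbol as its string, unpack it back to a Symbol
MessagePack.register_pack_type(0x01, Symbol) { |symbol| symbol.to_s }
MessagePack.register_unpack_type(0x01) { |data| data.to_sym }

MessagePack.unpack(:hello.to_msgpack) # => :hello
```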
For nil, true, false, Fixnum, Float, String, Array and Hash a registered
ext type is ignored. They are always packed according to the MessagePack
specification.
Procs, blocks or lambdas
If you want to pack and unpack mruby blocks, take a look at the mruby-proc-irep-ext gem; it can be registered like the other extension types.
Overriding to_msgpack
Overriding to_msgpack is not supported; MessagePack.pack ignores it, including when the object is contained in a Hash or Array.
This gem treats objects like Ruby does: if you want to change the way your custom class gets handled, you can add to_hash, to_ary, to_int or to_str methods so it will be packed like a Hash, Array, Fixnum or String (in that order).
Pure JavaScript only (No node-gyp nor gcc required)
Faster than any other pure JavaScript libraries on node.js v4
Even faster than node-gyp C++ based msgpack library (90% faster on encoding)
A streaming encoding and decoding interface is also available. It's even faster.
Ready for Web browsers including Chrome, Firefox, Safari and even IE8
Tested on Node.js v0.10, v0.12, v4, v5 and v6 as well as Web browsers
Encoding and Decoding MessagePack
var msgpack = require("msgpack-lite");

// encode from JS Object to MessagePack (Buffer)
var buffer = msgpack.encode({"foo": "bar"});

// decode from MessagePack (Buffer) to JS Object
var data = msgpack.decode(buffer); // => {"foo": "bar"}

// if encode/decode receives an invalid argument an error is thrown
Writing to MessagePack Stream
var fs = require("fs");
var msgpack = require("msgpack-lite");

var writeStream = fs.createWriteStream("test.msp");
var encodeStream = msgpack.createEncodeStream();
encodeStream.pipe(writeStream);

// send multiple objects to stream
encodeStream.write({foo: "bar"});
encodeStream.write({baz: "qux"});

// call this once you're done writing to the stream.
encodeStream.end();
Reading from MessagePack Stream
var fs = require("fs");
var msgpack = require("msgpack-lite");

var readStream = fs.createReadStream("test.msp");
var decodeStream = msgpack.createDecodeStream();

// show multiple objects decoded from stream
readStream.pipe(decodeStream).on("data", console.warn);
Decoding MessagePack Bytes Array
var msgpack = require("msgpack-lite");

// decode() accepts Buffer instance per default
msgpack.decode(Buffer([0x81, 0xA3, 0x66, 0x6F, 0x6F, 0xA3, 0x62, 0x61, 0x72]));

// decode() also accepts Array instance
msgpack.decode([0x81, 0xA3, 0x66, 0x6F, 0x6F, 0xA3, 0x62, 0x61, 0x72]);

// decode() accepts raw Uint8Array instance as well
msgpack.decode(new Uint8Array([0x81, 0xA3, 0x66, 0x6F, 0x6F, 0xA3, 0x62, 0x61, 0x72]));
Command Line Interface
A CLI tool bin/msgpack converts data stream from JSON to MessagePack and vice versa.
$ make test-browser-local
open the following url in a browser:
http://localhost:4000/__zuul
Browser Build
Browser version msgpack.min.js is also available: 50KB minified, 14KB gzipped.
<!--[if lte IE 9]>
<script src="https://cdnjs.cloudflare.com/ajax/libs/es5-shim/4.1.10/es5-shim.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/json3/3.3.2/json3.min.js"></script>
<![endif]-->
<script src="https://rawgit.com/kawanet/msgpack-lite/master/dist/msgpack.min.js"></script>
<script>
  // encode from JS Object to MessagePack (Uint8Array)
  var buffer = msgpack.encode({foo: "bar"});

  // decode from MessagePack (Uint8Array) to JS Object
  var array = new Uint8Array([0x81, 0xA3, 0x66, 0x6F, 0x6F, 0xA3, 0x62, 0x61, 0x72]);
  var data = msgpack.decode(array);
</script>
MessagePack With Browserify
Step #1: write some code at first.
var msgpack = require("msgpack-lite");
var buffer = msgpack.encode({"foo": "bar"});
var data = msgpack.decode(buffer);
console.warn(data); // => {"foo": "bar"}
Proceed to the next steps if you prefer faster browserify compilation time.
Step #2: add a browser property to package.json in your project. This refers to the global msgpack object instead of including the whole msgpack-lite source code.
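A sketch of what that package.json entry might look like (the exact mapping is an assumption; check the msgpack-lite README for the canonical snippet):

```json
{
  "browser": {
    "msgpack-lite": "msgpack"
  }
}
```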
A benchmark tool lib/benchmark.js is available to compare encoding/decoding speed
(operation per second) with other MessagePack modules.
It counts operations of 1KB JSON document in 10 seconds.
Streaming benchmark tool lib/benchmark-stream.js is also available.
It counts milliseconds for 1,000,000 operations on a 30-byte fluentd msgpack fragment.
This shows that streaming encoding and decoding are significantly faster.
$ npm run benchmark-stream 2
| operation (1000000 x 2) | op | ms | op/s |
|---|---|---|---|
| stream.write(msgpack.encode(obj)); | 1000000 | 3027 | 330360 |
| stream.write(notepack.encode(obj)); | 1000000 | 2012 | 497017 |
| msgpack.Encoder().on("data",ondata).encode(obj); | 1000000 | 2956 | 338294 |
| msgpack.createEncodeStream().write(obj); | 1000000 | 1888 | 529661 |
| stream.write(msgpack.decode(buf)); | 1000000 | 2020 | 495049 |
| stream.write(notepack.decode(buf)); | 1000000 | 1794 | 557413 |
| msgpack.Decoder().on("data",ondata).decode(buf); | 1000000 | 2744 | 364431 |
| msgpack.createDecodeStream().write(buf); | 1000000 | 1341 | 745712 |
Test environment: msgpack-lite 0.1.14, Node v4.2.3, Intel(R) Xeon(R) CPU E5-2666 v3 @ 2.90GHz
MessagePack Mapping Table
The following table shows how JavaScript objects (values) are mapped to MessagePack formats and vice versa.

| Source Value | MessagePack Format | Value Decoded |
|---|---|---|
| null, undefined | nil format family | null |
| Boolean (true, false) | bool format family | Boolean (true, false) |
| Number (32bit int) | int format family | Number (int or double) |
| Number (64bit double) | float format family | Number (double) |
| String | str format family | String |
| Buffer | bin format family | Buffer |
| Array | array format family | Array |
| Map | map format family | Map (if usemap=true) |
| Object (plain object) | map format family | Object (or Map if usemap=true) |
| Object (see below) | ext format family | Object (see below) |
Note that both null and undefined are mapped to the nil (0xC0) format. In other words, an undefined value will be upgraded to null.
Extension Types
The MessagePack specification allows 128 application-specific extension types.
The library uses the following types to make round-trip conversion possible
for JavaScript native objects.
| Type | Object | Type | Object |
|---|---|---|---|
| 0x00 | | 0x10 | |
| 0x01 | EvalError | 0x11 | Int8Array |
| 0x02 | RangeError | 0x12 | Uint8Array |
| 0x03 | ReferenceError | 0x13 | Int16Array |
| 0x04 | SyntaxError | 0x14 | Uint16Array |
| 0x05 | TypeError | 0x15 | Int32Array |
| 0x06 | URIError | 0x16 | Uint32Array |
| 0x07 | | 0x17 | Float32Array |
| 0x08 | | 0x18 | Float64Array |
| 0x09 | | 0x19 | Uint8ClampedArray |
| 0x0A | RegExp | 0x1A | ArrayBuffer |
| 0x0B | Boolean | 0x1B | Buffer |
| 0x0C | String | 0x1C | |
| 0x0D | Date | 0x1D | DataView |
| 0x0E | Error | 0x1E | |
| 0x0F | Number | 0x1F | |
Other extension types are mapped to built-in ExtBuffer object.
Custom Extension Types (Codecs)
Register a custom extension type number to serialize/deserialize your own class instances.
var msgpack = require("msgpack-lite");

var codec = msgpack.createCodec();
codec.addExtPacker(0x3F, MyVector, myVectorPacker);
codec.addExtUnpacker(0x3F, myVectorUnpacker);

var data = new MyVector(1, 2);
var encoded = msgpack.encode(data, {codec: codec});
var decoded = msgpack.decode(encoded, {codec: codec});

function MyVector(x, y) {
  this.x = x;
  this.y = y;
}

function myVectorPacker(vector) {
  var array = [vector.x, vector.y];
  return msgpack.encode(array); // return Buffer serialized
}

function myVectorUnpacker(buffer) {
  var array = msgpack.decode(buffer);
  return new MyVector(array[0], array[1]); // return Object deserialized
}
The first argument of addExtPacker and addExtUnpacker should be an integer within the range of 0 and 127 (0x00 and 0x7F). myVectorPacker is a function that accepts an instance of MyVector and should return a buffer representing that instance. myVectorUnpacker is the opposite: it accepts a buffer and should return an instance of MyVector.
If you pass an array of functions to addExtPacker or addExtUnpacker, the value to be encoded/decoded will pass through each one in order. This allows you to do things like this:
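For instance, a Date could be packed by first converting it to a Number and then msgpack-encoding that number; this sketch reuses the built-in Date type number (0x0D) from the table above, and the exact pipeline shown is an assumption:

```js
var msgpack = require("msgpack-lite");
var codec = msgpack.createCodec();

// packing pipeline: Date -> Number -> Buffer
codec.addExtPacker(0x0D, Date, [Number, msgpack.encode]);

// unpacking pipeline: Buffer -> Number -> Date
codec.addExtUnpacker(0x0D, [msgpack.decode, function(value) { return new Date(value); }]);
```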
You can also pass the codec option to msgpack.Decoder(options), msgpack.Encoder(options), msgpack.createEncodeStream(options), and msgpack.createDecodeStream(options).
If you wish to modify the default built-in codec, you can access it at msgpack.codec.preset.
Custom Codec Options
msgpack.createCodec() function accepts some options.
It does NOT have the preset extension types defined when no options are given.
var codec =msgpack.createCodec();
preset: It has the preset extension types described above.
var codec =msgpack.createCodec({preset:true});
safe: It runs a validation of the value before writing it into buffer. This is the default behavior for some old browsers which do not support ArrayBuffer object.
var codec =msgpack.createCodec({safe:true});
useraw: It uses raw formats instead of bin and str.
var codec =msgpack.createCodec({useraw:true});
int64: It decodes msgpack's int64/uint64 formats with int64-buffer object.
var codec =msgpack.createCodec({int64:true});
binarraybuffer: It ties msgpack's bin format with ArrayBuffer object, instead of Buffer object.
var codec =msgpack.createCodec({binarraybuffer:true, preset:true});
uint8array: It returns Uint8Array object when encoding, instead of Buffer object.
var codec =msgpack.createCodec({uint8array:true});
usemap: Uses the global JavaScript Map type, if available, to unpack
MessagePack map elements.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
msgpack-tools contains simple command-line utilities for converting from MessagePack to JSON and vice-versa. They support options for lax parsing, lossy conversions, pretty-printing, and base64 encoding.
msgpack2json -- Convert MessagePack to JSON
json2msgpack -- Convert JSON to MessagePack
They can be used for dumping MessagePack from a file or web API to a human-readable format, or for converting hand-written or generated JSON to MessagePack. The lax parsing mode supports comments and trailing commas in JSON, making it possible to hand-write your app or game data in JSON and convert it at build-time to MessagePack.
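For example, both tools read stdin and write stdout by default, so a typical build-time conversion is just a pipe (the flags for lax parsing, pretty-printing and base64 are described in the man pages):

```sh
# convert hand-written JSON to MessagePack at build time
json2msgpack < data.json > data.mp

# dump it back in human-readable form
msgpack2json < data.mp
```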
Mac OS X (Homebrew): brew install https://ludocode.github.io/msgpack-tools.rb
Debian (Ubuntu, etc.): .deb package for x86_64 in the latest release; install with dpkg
For other platforms, msgpack-tools must be built from source. Download the msgpack-tools tarball from the latest release page (not the "source code" archive generated by GitHub, but the actual release package.)
msgpack-tools uses CMake. A configure wrapper is provided that calls CMake, so you can simply run the usual:
./configure && make && sudo make install
If you are building from the repository, you will need md2man to generate the man pages.
Differences between MessagePack and JSON
MessagePack is intended to be very close to JSON in supported features, so they can usually be transparently converted from one to the other. There are some differences, however, which can complicate conversions.
These are the differences in what objects are representable in each format:
JSON keys must be strings. MessagePack keys can be any type, including maps and arrays.
JSON supports "bignums", i.e. integers of any size. MessagePack integers must fit within a 64-bit signed or unsigned integer.
JSON real numbers are specified in decimal scientific notation and can have arbitrary precision. MessagePack real numbers are in IEEE 754 standard 32-bit or 64-bit binary.
MessagePack supports binary and extension type objects. JSON does not support binary data. Binary data is often encoded into a base64 string to be embedded into a JSON document.
A JSON document can be encoded in UTF-8, UTF-16 or UTF-32, and the entire document must be in the same encoding. MessagePack strings are required to be UTF-8, although this is not enforced by many encoding/decoding libraries.
By default, msgpack2json and json2msgpack convert in strict mode. If an object in the source format is not representable in the destination format, the converter aborts with an error. A lax mode is available which performs a "lossy" conversion, and base64 conversion modes are available to support binary data in JSON.
In the examples above, the pack method automatically packs a value depending on its type. But not all PHP types
can be uniquely translated to MessagePack types. For example, the MessagePack format defines map and array types,
which are represented by a single array type in PHP. By default, the packer packs a PHP array as a MessagePack
array if it has sequential numeric keys starting from 0, and as a MessagePack map otherwise:
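A short illustration of that default behavior, as a sketch using the Packer class described below:

```php
use MessagePack\Packer;

$packer = new Packer();

$packer->pack([1, 2, 3]);                // sequential keys from 0 => MessagePack array
$packer->pack([1 => 'one', 2 => 'two']); // non-sequential keys   => MessagePack map
$packer->pack(['foo' => 1, 'bar' => 2]); // string keys           => MessagePack map
```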
The Packer object supports a number of bitmask-based options for fine-tuning the packing
process (defaults are in bold):
| Name | Description |
|---|---|
| FORCE_STR | Forces PHP strings to be packed as MessagePack UTF-8 strings |
| FORCE_BIN | Forces PHP strings to be packed as MessagePack binary data |
| **DETECT_STR_BIN** | Detects the MessagePack str/bin type automatically |
| FORCE_ARR | Forces PHP arrays to be packed as MessagePack arrays |
| FORCE_MAP | Forces PHP arrays to be packed as MessagePack maps |
| **DETECT_ARR_MAP** | Detects the MessagePack array/map type automatically |
| FORCE_FLOAT32 | Forces PHP floats to be packed as 32-bit MessagePack floats |
| **FORCE_FLOAT64** | Forces PHP floats to be packed as 64-bit MessagePack floats |
The type detection mode (DETECT_STR_BIN/DETECT_ARR_MAP) adds some overhead which can be noticed when you pack
large (16- and 32-bit) arrays or strings. However, if you know the value type in advance (for example, you only
work with UTF-8 strings or/and associative arrays), you can eliminate this overhead by forcing the packer to use
the appropriate type, which will save it from running the auto-detection routine. Another option is to explicitly
specify the value type. The library provides 2 auxiliary classes for this, Map and Binary.
Check the "Type transformers" section below for details.
Examples:
use MessagePack\Packer;
use MessagePack\PackOptions;

// pack PHP strings to MP strings, PHP arrays to MP maps
// and PHP 64-bit floats (doubles) to MP 32-bit floats
$packer = new Packer(PackOptions::FORCE_STR | PackOptions::FORCE_MAP | PackOptions::FORCE_FLOAT32);

// pack PHP strings to MP binaries and PHP arrays to MP arrays
$packer = new Packer(PackOptions::FORCE_BIN | PackOptions::FORCE_ARR);

// these will throw MessagePack\Exception\InvalidOptionException
$packer = new Packer(PackOptions::FORCE_STR | PackOptions::FORCE_BIN);
$packer = new Packer(PackOptions::FORCE_FLOAT32 | PackOptions::FORCE_FLOAT64);
Unpacking
To unpack data you can either use an instance of a BufferUnpacker:
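A minimal sketch of that approach (assuming the BufferUnpacker's reset() and unpack() methods; $packed is a string of MessagePack data):

```php
use MessagePack\BufferUnpacker;

$unpacker = new BufferUnpacker();
$unpacker->reset($packed);    // load the whole buffer

$value = $unpacker->unpack(); // unpack a single value from it
```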
If the packed data is received in chunks (e.g. when reading from a stream), use the tryUnpack method, which attempts
to unpack data and returns an array of unpacked messages (if any) instead of throwing an InsufficientDataException:
while ($chunk = ...) {
    $unpacker->append($chunk);

    if ($messages = $unpacker->tryUnpack()) {
        return $messages;
    }
}
If you want to unpack from a specific position in a buffer, use seek():
$unpacker->seek(42);  // set position equal to 42 bytes
$unpacker->seek(-8);  // set position to 8 bytes before the end of the buffer
To skip bytes from the current position, use skip():
$unpacker->skip(10); // set position to 10 bytes ahead of the current position
Besides the above methods BufferUnpacker provides type-specific unpacking methods, namely:
The BufferUnpacker object supports a number of bitmask-based options for fine-tuning the unpacking process (defaults
are in bold):
| Name | Description |
|---|---|
| **BIGINT_AS_EXCEPTION** | Throws an exception on integer overflow [1] |
| BIGINT_AS_GMP | Converts overflowed integers to GMP objects [2] |
| BIGINT_AS_STR | Converts overflowed integers to strings |
1. The binary MessagePack format has unsigned 64-bit as its largest integer data type,
but PHP does not support such integers, which means that an overflow can occur during unpacking.
In addition to the basic types,
the library provides functionality to serialize and deserialize arbitrary types. In order to support a custom
type you need to create and register a transformer. The transformer should implement the Packable interface,
the Unpackable interface, or both.
The purpose of Packable transformers is to serialize a specific value to one of the basic MessagePack types. A good
example of such a transformer is a MapTransformer that comes with the library. It serializes Map objects (which
are simple wrappers around PHP arrays) to MessagePack maps. This is useful when you want to explicitly mark that
a given PHP array must be packed as a MessagePack map, without triggering the type's auto-detection routine.
Transformers implementing the Unpackable interface are intended for unpacking
extension types.
For example, the code below shows how to create a transformer that allows you to work transparently with DateTime
objects:
More type transformer examples can be found in the examples directory.
Exceptions
If an error occurs during packing/unpacking, a PackingFailedException or UnpackingFailedException will be thrown,
respectively.
In addition, there are two more exceptions that can be thrown during unpacking:
InsufficientDataException
IntegerOverflowException
An InvalidOptionException will be thrown in case an invalid option (or a combination of mutually exclusive options)
is used.
Tests
Run tests as follows:
vendor/bin/phpunit
Also, if you already have Docker installed, you can run the tests in a docker container.
First, create a container:
./dockerfile.sh | docker build -t msgpack -
The command above will create a container named msgpack with PHP 7.2 runtime.
You may change the default runtime by defining the PHP_RUNTIME environment variable:
export MP_BENCH_TARGETS=pure_p
export MP_BENCH_ITERATIONS=1000000
export MP_BENCH_ROUNDS=5
# a comma separated list of test names
export MP_BENCH_TESTS='complex array, complex map'
# or a group name
# export MP_BENCH_TESTS='@pecl_comp'
# or a regexp
# export MP_BENCH_TESTS='/complex (array|map)/'
php -n -dpcre.jit=1 -dzend_extension=opcache.so -dopcache.enable_cli=1 tests/bench.php
YSMessagePack is a MessagePack packer/unpacker written in Swift (Swift 3 ready). It is designed to be easy to use. YSMessagePack includes the following features:
Pack custom structs and classes / unpack objects by groups and apply a handler to each group (easier to re-construct your structs)
Asynchronous unpacking
Pack and unpack multiple message-packed data regardless of types with only one line of code
Specify how many items to unpack
Get remaining bytes that were not message-packed; start packing from some index, so you can mix messagepack with other protocols
Helper methods to cast NSData to desired types
Operator +^ and +^= to join NSData
Version
1.6.2 (Dropped swift 2 support, swift 3 support only from now on)
Installation
Simply add the files under YSMessagePack/Classes to your project,
or use CocoaPods: add pod 'YSMessagePack', '~> 1.6.2' to your Podfile
Usage
Pack:
let exampleInt: Int = 1
let exampleStr: String = "Hello World"
let exampleArray: [Int] = [1, 2, 3, 4, 5, 6]
let bool: Bool = true

// To pack items, just put all of them in a single array
// and call the `pack(items:)` function

// this will be the packed data
let msgPackedBytes: NSData = pack(items: [bool, exampleInt, exampleStr, exampleArray])

// Now your payload is ready to send!!!
But what if we have some custom data structure to send?
// To make your struct / class packable
struct MyStruct: Packable { // Conform to this protocol
    var name: String
    var index: Int

    func packFormat() -> [Packable] { // protocol function
        return [name, index] // pack order
    }

    func msgtype() -> MsgPackTypes {
        return .Custom
    }
}
let exampleInt: Int = 1
let exampleStr: String = "Hello World"
let exampleArray: [Int] = [1, 2, 3, 4, 5]
let bool: Bool = true
let foo = MyStruct(name: "foo", index: 626)

let msgPackedBytes = pack(items: [bool, foo, exampleInt, exampleStr, exampleArray])
Or you can pack them individually and add them to a byte array manually (Which is also less expensive)
let exampleInt: Int = 1
let exampleStr: String = "Hello World"
let exampleArray: [Int] = [1, 2, 3, 4, 5, 6]

// Now pack them individually
let packedInt = exampleInt.packed()

// if you didn't specify an encoding, the default encoding will be ASCII
#if swift(>=3)
let packedStr = exampleStr.packed(withEncoding: NSASCIIStringEncoding)
#else
let packedStr = exampleStr.packed(withEncoding: .ascii)
#endif
let packedArray = exampleArray.packed()

// You can use the +^ operator to join the data on the rhs to the end of the data on the lhs
let msgPackedBytes: NSData = packedInt +^ packedStr +^ packedArray
Unpack
YSMessagePack offers a number of different ways and options to unpack, including unpacking asynchronously; see the example project for details.
Unpacking a message-packed byte array is pretty easy:
do {
    // The unpack method will return an array of NSData in which each element is an unpacked object
    let unpackedItems = try msgPackedBytes.itemsUnpacked()

    // instead of casting the NSData to the type you want, you can call these `castTo..` methods to do the job for you
    let int: Int = unpackedItems[2].castToInt()

    // Same as packing, you can also specify the encoding you want to use, default is ASCII
    let str: String = unpackedItems[3].castToString()

    let array: NSArray = unpackedItems[4].castToArray()
} catch let error as NSError {
    NSLog("Error occurred during unpacking: %@", error)
}
// Remember how to pack your struct? Here is a better way to unpack a stream of bytes formatted in a specific format
let testObj1 = MyStruct(name: "TestObject1", index: 1)
let testObj2 = MyStruct(name: "TestObject2", index: 2)
let testObj3 = MyStruct(name: "TestObject3", index: 3)

// packCustomObjects is another method that can pack your own structs more easily
let packed = packCustomObjects(testObj1, testObj2, testObj3)
let nobjsInOneGroup = 2

try! packed.unpackByGroupsWith(nobjsInOneGroup) { (unpackedData, isLast) -> Bool in
    // you can also involve additional args like number of groups to unpack
    guard let name = unpackedData[0].castToString() else { return false } // abort unpacking when something is wrong
    let index = unpackedData[1].castToInt()
    let testObj = MyStruct(name: name, index: index) // assembly
    return true // proceed unpacking, or return false to abort
}
If you don't want to unpack every single thing included in the message-pack byte array, you can specify an amount to unpack. If you want to keep the remaining bytes, pass true for the returnRemainingBytes argument; the remaining bytes will be stored at the end of the NSData array.
do {
    // Unpack only 2 objects, and we are not interested in remaining bytes
    let unpackedItems = try msgPackedBytes.itemsUnpacked(specific_amount: 2, returnRemainingBytes: false)
    print(unpackedItems.count) // will print 2
} catch let error as NSError {
    NSLog("Error occurs during unpacking: %@", error)
}
This library is a lightweight implementation of the MessagePack binary serialization format. MessagePack is a 1-to-1 binary representation of JSON, and the official specification can be found here: https://github.com/msgpack....
This library is designed to be super light weight.
It's easiest to understand how this library works if you think in terms of JSON. The type MPackMap represents a dictionary, and the type MPackArray represents an array.
Create MPack instances with the static method MPack.From(object);. You can pass any simple type (such as string, integer, etc), or any Array composed of a simple type. MPack also has implicit conversions from most of the basic types built in.
Transform an MPack object back into a CLR type with the static method MPack.To<T>(); or MPack.To(type);. MPack also has explicit conversions back to most basic types; you can do string str = (string)mpack; for instance.
MPack now supports native asynchronous reading and cancellation tokens. It will not block a thread to wait on a stream.
NuGet
MPack is available as a NuGet package!
PM> Install-Package MPack
Usage
Create a object model that can be represented as MsgPack. Here we are creating a dictionary, but really it can be anything:
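For example, a sketch of such a dictionary using only the members mentioned in this README (MPack.From, MPackMap, MPackArray); the keys and values are arbitrary:

```csharp
MPackMap dictionary = new MPackMap
{
    { "name", MPack.From("Tom") },
    { "age", MPack.From(42) },
    { "scores", new MPackArray { MPack.From(1), MPack.From(2), MPack.From(3) } }
};
```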
Serialize the data to a byte array or to a stream to be saved, transmitted, etc:
byte[] encodedBytes = dictionary.EncodeToBytes();
// -- or --
dictionary.EncodeToStream(stream);
Parse the binary data back into a MPack object model (you can also cast back to an MPackMap or MPackArray after reading if you want dictionary/array methods):
var reconstructed = MPack.ParseFromBytes(encodedBytes);
// -- or --
var reconstructed = MPack.ParseFromStream(stream);
Turn MPack objects back into types that we understand with the generic To<>() method. Since we know the types of everything here we can just call To<bool>() to reconstruct our bool, but if you don't know you can access the instance enum MPack.ValueType to know what kind of value it is:
This Arduino library provides a light weight serializer and parser for messagepack.
Install
Download the zip, and import it with your Arduino IDE: Sketch>Include Library>Add .zip library
Usage
See the either the .h file, or the examples (led_controller and test_uno_writer).
In short:
functions like msgpck_what_next(Stream * s); watch the next type of data without reading it (without advancing the buffer of Stream s).
functions like msgpck_read_bool(Stream * s, bool *b) read a value from Stream s.
functions like msgpck_write_bool(Stream * s, bool b) write a value on Stream s.
Notes:
Streams are used as much as possible in order not to add too much overhead with buffers. Therefore you only need to be able to store a minimal number of values at a given time.
Map and Array related functions concern only their headers. Ex: if you want to write an array containing two elements, you should write the array header and then write the two elements, as sketched below.
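A sketch of that header-then-elements pattern on an Arduino (msgpck_write_array_header and msgpck_write_integer are assumed names beyond the functions quoted above; check the library's .h file):

```cpp
#include <msgpck.h>

void setup() {
  Serial.begin(115200);
}

void loop() {
  // an array of two elements: write the header first, then the two elements
  msgpck_write_array_header(&Serial, 2);
  msgpck_write_integer(&Serial, 42);
  msgpck_write_bool(&Serial, true);
  delay(1000);
}
```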
Limitations
Currently the library does not support:
8-byte floats (only 4-byte floats are supported by default on every Arduino, and floats are not recommended on Arduino anyway)
The usage of the MsgPack class is very simple. You need to create an object and call the read and write methods.
```actionscript
// message pack object created
var msgpack:MsgPack = new MsgPack();
// encode an array
var bytes:ByteArray = msgpack.write([1, 2, 3, 4, 5]);
// rewind the buffer
bytes.position = 0;
// print the decoded object
trace(msgpack.read(bytes));
```

### Flags

Currently there are three flags which you may use to initialize a MsgPack object:

* `MsgPackFlags.READ_STRING_AS_BYTE_ARRAY`: message pack string data is read as a byte array instead of a string;
* `MsgPackFlags.ACCEPT_LITTLE_ENDIAN`: MsgPack objects will work with little endian buffers (the message pack specification defines big endian as the default);
* `MsgPackFlags.SPEC2013_COMPATIBILITY`: MsgPack will run in backwards compatibility mode.
```actionscript
var msgpack:MsgPack;

// use logical operator OR to set the flags.
msgpack = new MsgPack(MsgPackFlags.READ_STRING_AS_BYTE_ARRAY | MsgPackFlags.ACCEPT_LITTLE_ENDIAN);
```
Advanced Usage
Extensions
You can create your own Extension Workers by extending the ExtensionWorker Class and then assigning it to the MsgPack Factory.
The following example assigns a custom worker which extends the ExtensionWorker Class.
```actionscript
var msgpack:MsgPack = new MsgPack();
// Assign the new worker to the factory.
msgpack.factory.assign(new CustomWorker());
```

For more information regarding Extensions refer to the MessagePack specification.
### Priorities
Worker priority behaves similarly to how the Adobe Event Dispatcher priorities work. In MessagePack, deciding which worker will be used for serializing/deserializing depends on two (2) factors.
1. The order in which the worker was assigned to the factory.
2. The priority of the worker. Higher values take precedence.
All workers have a default priority of 0.
In the following example `workerB` will never be used because it's assigned after `workerA`:
```actionscript
var msgpack:MsgPack = new MsgPack();
var workerA:StringWorker = new StringWorker();
var workerB:DifferentStringWorker = new DifferentStringWorker();
msgpack.factory.assign(workerA);
msgpack.factory.assign(workerB);
```
However if we adjust the priority of workerB, then workerA will never be used.
```actionscript
var msgpack:MsgPack = new MsgPack();
var workerA:StringWorker = new StringWorker();
var workerB:DifferentStringWorker = new DifferentStringWorker(null, 1);

msgpack.factory.assign(workerA);
msgpack.factory.assign(workerB);
```
## Credits
This application uses Open Source components. You can find the source code of their open source projects along with license information below. We acknowledge and are grateful to these developers for their contributions to open source.
Project: as3-msgpack https://github.com/loteixeira/as3-msgpack
Copyright (C) 2013 Lucas Teixeira
License (Apache V2.0) http://www.apache.org/licenses/LICENSE-2.0
Convert to and from msgpack objects in R using the official msgpack-c API through Rcpp.
A flowchart describing the conversion of R objects into msgpack objects and back.
Msgpack EXT types are converted to raw vectors with EXT attributes containing the extension type. The extension type must be an integer from 0 to 127.
Maps are converted to data.frames with additional class "map". Map objects in R contain key and value list columns and can be simplified to named lists or named vectors. The helper function msgpack_map creates map objects that can be serialized into msgpack.
msgpack11 is a tiny MsgPack library for C++11, providing MsgPack parsing and serialization.
This library is inspired by json11.
The API of msgpack11 is designed to be similar to that of json11.
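A minimal sketch under that assumption, i.e. that msgpack11 mirrors json11's object initializers, dump() and parse(); verify the names against the msgpack11 headers:

```cpp
#include <iostream>
#include <string>
#include "msgpack11.hpp"

int main() {
    using msgpack11::MsgPack;

    MsgPack obj = MsgPack::object {
        { "compact", true },
        { "schema", 0 },
    };

    std::string bytes = obj.dump();              // serialize to a MessagePack string

    std::string err;
    MsgPack parsed = MsgPack::parse(bytes, err); // parse it back
    if (err.empty()) {
        std::cout << parsed["compact"].bool_value() << std::endl;
    }
    return 0;
}
```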
Installation
Using CMake
git clone [email protected]:ar90n/msgpack11.git
mkdir build
cd build
cmake ../msgpack11
make && make install
Using Buck
git clone [email protected]:ar90n/msgpack11.git
cd msgpack11
buck build :msgpack11
Data::MessagePack - Perl 6 implementation of MessagePack
SYNOPSIS
use Data::MessagePack;
my $data-structure = {
key => 'value',
k2 => [ 1, 2, 3 ]
};
my $packed = Data::MessagePack::pack( $data-structure );
my $unpacked = Data::MessagePack::unpack( $packed );
Or for streaming:
use Data::MessagePack::StreamingUnpacker;
my $supplier = Some Supplier; #Could be from IO::Socket::Async for instance
my $unpacker = Data::MessagePack::StreamingUnpacker.new(
source => $supplier.Supply
);
$unpacker.tap( -> $value {
say "Got new value";
say $value.perl;
}, done => { say "Source supply is done"; } );
DESCRIPTION
The present module proposes an implementation of the MessagePack specification as described on http://msgpack.org/. The implementation is currently in pure Perl, which could come with a performance penalty compared to some other packers implemented in C.
WHY THAT MODULE
There are already some parts of MessagePack implemented in Perl 6, for instance MessagePack available here: https://github.com/uasi/messagepack-pm6; however, that module only implements the unpacking part of the specification. Furthermore, that module uses the unpack functionality, which is tagged as experimental as of today.
FUNCTIONS
function pack
That function takes a data structure as parameter, and returns a Blob with the packed version of the data structure.
function unpack
That function takes a MessagePack packed message as parameter, and returns the deserialized data structure.
This is a command line tool to inspect/show a data serialized by MessagePack.
Installation
Executable binary files are available from releases. Download a file for your platform, and use it.
Otherwise, you can install rubygem version on your CRuby runtime:
$ gem install msgpack-inspect
Usage
Usage: msgpack-inspect [options] FILE
Options:
-f, --format FORMAT output format of inspection result (yaml/json/jsonl) [default: yaml]
-r, --require LIB ruby file path to require (to load ext type definitions)
-v, --version Show version of this software
-h, --help Show this message
The -r option is available only with the rubygem version, and unavailable with the mruby binary release.
FILE is a file in which msgpack binary data is stored. Specify - to inspect data from STDIN.
This command shows all the data contained, in the specified format (YAML by default).
let hey =MessagePack("hey there!")
let bytes =try MessagePack.encode(hey)
let original =String(try MessagePack.decode(bytes: bytes))
Stream API
let hey =MessagePack("hey there!")
let stream =BufferedStream(stream: NetworkStream(socket: client))
try MessagePack.encode(hey, to: stream)
try stream.flush()
let original =String(try MessagePack.decode(from: stream))
Performance optimized
let output =OutputByteStream()
var encoder =MessagePackWriter(output)
try encoder.encode("one")
try encoder.encode(2)
try encoder.encode(3.0)
let encoded = output.bytes

var decoder = MessagePackReader(InputByteStream(encoded))
let string =try decoder.decode(String.self)
let int =try decoder.decode(UInt8.self)
let double =try decoder.decode(Double.self)
print("decoded manually: \(string), \(int), \(double)")
CWPack is a lightweight and yet complete implementation of the
MessagePack serialization format
version 5.
Excellent Performance
Together with MPack, CWPack is the fastest open-source messagepack implementation. Both totally outperform
CMP and msgpack-c
Design
CWPack does no memory allocations and no file handling. All that is done
outside of CWPack.
CWPack works against memory buffers. User-defined handlers are called when buffers are
filled up (packing) or need a refill (unpacking).
Containers (arrays, maps) are read/written in parts: first the item containing the size, and
then the contained items one by one. The exception to this is the cw_skip_items function, which
skips whole containers.
Example
Pack and unpack example from the MessagePack home page:
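The example itself did not survive here; below is a sketch of packing the msgpack.org example map with CWPack's buffer-based API (the cw_pack_* names are recalled from cwpack.h and should be verified there):

```c
#include <stdbool.h>
#include "cwpack.h"

void pack_homepage_example(void) {
    char buffer[32];
    cw_pack_context pc;

    // no overflow handler: the buffer must be large enough
    cw_pack_context_init(&pc, buffer, sizeof(buffer), 0);

    cw_pack_map_size(&pc, 2);      // {"compact": true, "schema": 0}
    cw_pack_cstr(&pc, "compact");
    cw_pack_boolean(&pc, true);
    cw_pack_cstr(&pc, "schema");
    cw_pack_unsigned(&pc, 0);

    // pc.current - pc.start is the number of bytes produced
}
```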
CWPack may be run in compatibility mode. It affects only packing: EXT is considered illegal, BIN is transformed to STR and generation of STR8 is suppressed.
Error handling
When an error is detected in a context, the context is stopped and all future calls to that context return immediately without any action.
CWPack does not check for illegal values (e.g. in STR for illegal unicode characters).
Build
CWPack consists of a single src file and two header files. It is written
in strict ANSI C and the files together are ~ 1.2K lines. No separate build is necessary; just include the
files in your own build.
CWPack has no dependencies to other libraries.
Test
Included in the test folder are a module test and a performance test and shell scripts to run them.
MessagePack for C# (.NET, .NET Core, Unity, Xamarin)
The extremely fast MessagePack serializer for C#. It is 10x faster than MsgPack-Cli and outperforms other C# serializers. MessagePack for C# also ships with built-in support for LZ4 compression - an extremely fast compression algorithm. Performance is important, particularly in applications like game development, distributed computing, microservice architecture, and caching.
For Unity, download from the releases page, which provides a .unitypackage. For Unity IL2CPP or Xamarin AOT environments, check the pre-code generation section.
Quick Start
Define a class and mark it as [MessagePackObject], mark public members (properties or fields) as [Key], and call MessagePackSerializer.Serialize<T>/Deserialize<T>. ToJson helps dump the binary.
// mark MessagePackObjectAttribute
[MessagePackObject]
public class MyClass
{
    // Key is the serialization index, it is important for versioning.
    [Key(0)]
    public int Age { get; set; }

    [Key(1)]
    public string FirstName { get; set; }

    [Key(2)]
    public string LastName { get; set; }

    // public members that are not serialization targets are marked with IgnoreMemberAttribute
    [IgnoreMember]
    public string FullName { get { return FirstName + LastName; } }
}

class Program
{
    static void Main(string[] args)
    {
        var mc = new MyClass
        {
            Age = 99,
            FirstName = "hoge",
            LastName = "huga",
        };

        // call Serialize/Deserialize, that's all.
        var bytes = MessagePackSerializer.Serialize(mc);
        var mc2 = MessagePackSerializer.Deserialize<MyClass>(bytes);

        // you can dump msgpack binary to human readable json.
        // By default, MessagePack for C# reduces property name information.
        // [99,"hoge","huga"]
        var json = MessagePackSerializer.ToJson(bytes);
        Console.WriteLine(json);
    }
}
MessagePackAnalyzer helps with object definitions. Attributes, accessibility, etc. are detected and problems become compiler errors.
If you want to allow a specific type (for example, when registering a custom type), put MessagePackAnalyzer.json at the project root and set its Build Action to AdditionalFiles.
This is a sample of the contents of MessagePackAnalyzer.json.
You can add custom type support, and there are official/third-party extension packages: for ImmutableCollections (ImmutableList<>, etc.), for ReactiveProperty, for Unity (Vector3, Quaternion, etc.), and for F# (Records, FsList, Discriminated Unions, etc.). Please see the extensions section.
MessagePack.Nil is the built-in null/void/unit representation type of MessagePack for C#.
Object Serialization
MessagePack for C# can serialize your own public class or struct. A serialization target must be marked with [MessagePackObject] and [Key]. The key type can be int or string. If the key type is int, the serialized format uses an array. If the key type is string, the serialized format uses a map. If you define [MessagePackObject(keyAsPropertyName: true)], the KeyAttribute is not required.
All serialization targets are public instance members (fields or properties). If you want to exclude a member from serialization, add [IgnoreMember] to it.
The target class must be public; private and internal classes are not allowed.
Which should you use, int keys or string keys? I recommend int keys because they are faster and more compact than string keys. But string keys carry key name information, which is useful for debugging.
MessagePackSerializer requires the target to be attributed for robustness. If a class grows, you need to be conscious of versioning. MessagePackSerializer uses the default value if a key does not exist. If you use int keys, they should start from 0 and be sequential. If a property becomes unnecessary, leave its number missing; reuse is bad. Also, if an int key's jump number is too large, it affects the binary size.
I want to use it like JSON.NET! I don't want to put attributes! If you think that way, you can use a contractless resolver.
public class ContractlessSample
{
    public int MyProperty1 { get; set; }
    public int MyProperty2 { get; set; }
}

var data = new ContractlessSample { MyProperty1 = 99, MyProperty2 = 9999 };
var bin = MessagePackSerializer.Serialize(data, MessagePack.Resolvers.ContractlessStandardResolver.Instance);

// {"MyProperty1":99,"MyProperty2":9999}
Console.WriteLine(MessagePackSerializer.ToJson(bin));

// You can set ContractlessStandardResolver as default.
MessagePackSerializer.SetDefaultResolver(MessagePack.Resolvers.ContractlessStandardResolver.Instance);

// serializable.
var bin2 = MessagePackSerializer.Serialize(data);
I want to serialize private members! By default, private members cannot be serialized/deserialized, but you can use an allow-private resolver.
[MessagePackObject]
public class PrivateSample
{
    [Key(0)]
    int x;

    public void SetX(int v)
    {
        x = v;
    }

    public int GetX()
    {
        return x;
    }
}

var data = new PrivateSample();
data.SetX(9999);

// You can choose StandardResolverAllowPrivate or ContractlessStandardResolverAllowPrivate
var bin = MessagePackSerializer.Serialize(data, MessagePack.Resolvers.DynamicObjectResolverAllowPrivate.Instance);
I don't need types, I want to use it like BinaryFormatter! You can use the typeless resolver and helpers. Please see the Typeless section.
Resolvers are the key customization point of MessagePack for C#. For details, please see the extension points section.
DataContract compatibility
You can use [DataContract] instead of [MessagePackObject]. If type is marked DataContract, you can use [DataMember] instead of [Key] and [IgnoreDataMember] instead of [IgnoreMember].
[DataMember(Order = int)] is the same as [Key(int)], and [DataMember(Name = string)] is the same as [Key(string)]. If you use [DataMember] without arguments, it is the same as [Key(nameof(propertyName))].
Using DataContract makes it a shared class library so that you do not have to reference MessagePack for C#. However, it is not included in analysis by the Analyzer or in code generation by mpc.exe. Also, features like UnionAttribute, MessagePackFormatterAttribute, SerializationConstructorAttribute etc. cannot be used. For this reason, I recommend that you basically use the MessagePack for C# attributes.
MessagePackSerializer chooses the constructor with the least matched arguments, matching by index if the key is an integer or by name (ignoring case) if the key is a string. If you encounter MessagePackDynamicObjectResolverException: can't find matched constructor parameter, you should check this.
If it cannot match automatically, you can manually specify the constructor to use with [SerializationConstructorAttribute].
[MessagePackObject]
public struct Point
{
    [Key(0)]
    public readonly int X;

    [Key(1)]
    public readonly int Y;

    // If not marked attribute, used this (least matched argument)
    public Point(int x)
    {
        X = x;
    }

    [SerializationConstructor]
    public Point(int x, int y)
    {
        this.X = x;
        this.Y = y;
    }
}
Serialization Callback
If an object implements IMessagePackSerializationCallbackReceiver, it receives OnBeforeSerialize and OnAfterDeserialize calls during the serialization process.
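A sketch of that interface in use (the class and its method bodies are arbitrary):

```csharp
[MessagePackObject(keyAsPropertyName: true)]
public class SampleCallback : IMessagePackSerializationCallbackReceiver
{
    public int Key { get; set; }

    public void OnBeforeSerialize()
    {
        Console.WriteLine("OnBefore");
    }

    public void OnAfterDeserialize()
    {
        Console.WriteLine("OnAfter");
    }
}
```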
MessagePack for C# supports serializing interfaces. It is like XmlInclude or ProtoInclude; in MessagePack for C# it is called Union. UnionAttribute can only be attached to an interface or an abstract class. It requires a discriminating integer key and a sub-type.
C# 7.0's type switch is the best match for Union. A Union is serialized to a two-element array.
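As a sketch, a Union declaration that matches the snippet below (BarClass and its OPQ property come from that snippet; the FooClass sibling is an assumed second sub-type):

```csharp
[Union(0, typeof(FooClass))]
[Union(1, typeof(BarClass))]
public interface IUnionSample
{
}

[MessagePackObject]
public class FooClass : IUnionSample
{
    [Key(0)]
    public int XYZ { get; set; }
}

[MessagePackObject]
public class BarClass : IUnionSample
{
    [Key(0)]
    public string OPQ { get; set; }
}
```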
IUnionSample data = new BarClass { OPQ = "FooBar" };

var bin = MessagePackSerializer.Serialize(data);

// Union is serialized to two-length array, [key, object]
// [1,["FooBar"]]
Console.WriteLine(MessagePackSerializer.ToJson(bin));
Using Union with an abstract class works the same as with an interface.
Serialization of inherited types is flattened into the array (or map); be careful with integer keys, which cannot be duplicated between the parent and all children.
Dynamic(Untyped) Deserialization
If you use MessagePackSerializer.Deserialize<object> or MessagePackSerializer.Deserialize<dynamic>, the messagepack binary is converted to primitive values: bool, char, sbyte, byte, short, int, long, ushort, uint, ulong, float, double, DateTime, string, byte[], object[], IDictionary<object, object>.
When deserializing, same as Dynamic(Untyped) Deserialization.
Typeless
The Typeless API is like BinaryFormatter: it embeds type information into the binary, so no type is needed to deserialize.
object mc = new Sandbox.MyClass()
{
    Age = 10,
    FirstName = "hoge",
    LastName = "huga"
};

// serialize to typeless
var bin = MessagePackSerializer.Typeless.Serialize(mc);

// the binary data embeds type-assembly information.
// ["Sandbox.MyClass, Sandbox",10,"hoge","huga"]
Console.WriteLine(MessagePackSerializer.ToJson(bin));

// can deserialize to MyClass with typeless
var objModel = MessagePackSerializer.Typeless.Deserialize(bin) as MyClass;
Type information is serialized in the msgpack ext format, with typecode 100.
MessagePackSerializer.Typeless is a shortcut for Serialize/Deserialize<object>(TypelessContractlessStandardResolver.Instance). If you want to configure the default typeless resolver, you can set it via MessagePackSerializer.Typeless.RegisterDefaultResolver.
TypelessFormatter can be used standalone or combined with existing resolvers.
Benchmarks comparing to other serializers were run on Windows 10 Pro x64, Intel Core i7-6700K 4.00GHz, 32GB RAM. The benchmark code is here, along with version info. ZeroFormatter and FlatBuffers have infinitely fast deserializers, so ignore their deserialize performance.
MessagePack for C# uses many techniques for improve performance.
The serializer works only on ref byte[] and an int offset; it does not use (Memory)Stream (calling the Stream API has overhead).
The high-level API uses an internal memory pool and does not allocate working memory under 64K.
String keys are not decoded for map (string key) lookup; an automata-based name lookup with IL-inlined code generation is used instead, see: AutomataDictionary.
For string key encoding, member name bytes are pre-generated and copied with a fixed-size binary copy in IL, see: UnsafeMemory.cs.
Before creating this library, I implemented a fast serializer in ZeroFormatter#Performance; this is a further-evolved implementation. MessagePack for C# is always fast, optimized for all types (primitives, small structs, large objects, and any collections).
Deserialize Performance per options
Performance varies depending on the options. This is a micro benchmark with BenchmarkDotNet. The target object has 9 members (MyProperty1 ~ MyProperty9), with all values zero.
| Method | Mean | Error | Scaled | Gen 0 | Allocated |
| --- | --- | --- | --- | --- | --- |
| IntKey | 72.67 ns | NA | 1.00 | 0.0132 | 56 B |
| StringKey | 217.95 ns | NA | 3.00 | 0.0131 | 56 B |
| Typeless_IntKey | 176.71 ns | NA | 2.43 | 0.0131 | 56 B |
| Typeless_StringKey | 378.64 ns | NA | 5.21 | 0.0129 | 56 B |
| MsgPackCliMap | 1,355.26 ns | NA | 18.65 | 0.1431 | 608 B |
| MsgPackCliArray | 455.28 ns | NA | 6.26 | 0.0415 | 176 B |
| ProtobufNet | 265.85 ns | NA | 3.66 | 0.0319 | 136 B |
| Hyperion | 366.47 ns | NA | 5.04 | 0.0949 | 400 B |
| JsonNetString | 2,783.39 ns | NA | 38.30 | 0.6790 | 2864 B |
| JsonNetStreamReader | 3,297.90 ns | NA | 45.38 | 1.4267 | 6000 B |
| JilString | 553.65 ns | NA | 7.62 | 0.0362 | 152 B |
| JilStreamReader | 1,408.46 ns | NA | 19.38 | 0.8450 | 3552 B |
IntKey, StringKey, Typeless_IntKey, and Typeless_StringKey are MessagePack for C# options. All MessagePack for C# options achieve zero memory allocation during deserialization. JsonNetString/JilString deserialize from a string; JsonNetStreamReader/JilStreamReader deserialize from a UTF8 byte[] through a StreamReader. Deserialization normally reads from a Stream, so in practice data is restored from byte[] (or Stream) rather than from string.
MessagePack for C# IntKey is the fastest. StringKey is slower than IntKey because matching against the member-name string is required. With IntKey, deserialization is: read the array length, then for(array length) { binary decode }. With StringKey it is: read the map length, then for(map length) { decode key, look up by key, binary decode }, which requires two additional steps (decode key and look up by key).
String keys are still often useful: contractless operation, a simple replacement for JSON, interoperability with other languages, and more robust versioning. MessagePack for C# is therefore also optimized for string keys. First, it does not decode the UTF8 byte[] into a String to match the member name; it looks up the byte[] as-is (avoiding decode cost and extra allocation).
It also matches keys as long values (8 characters at a time, padded with 0 if shorter) using an automaton that is inlined during IL code generation. This avoids computing a hash code over the byte[], and comparisons proceed one long-sized unit at a time.
This is a sample ILSpy decompilation of the generated deserializer code. If the number of nodes is large, an embedded binary search is used.
As an extra note, here is the serialize benchmark result.
| Method | Mean | Error | Scaled | Gen 0 | Allocated |
| --- | --- | --- | --- | --- | --- |
| IntKey | 84.11 ns | NA | 1.00 | 0.0094 | 40 B |
| StringKey | 126.75 ns | NA | 1.51 | 0.0341 | 144 B |
| Typeless_IntKey | 183.31 ns | NA | 2.18 | 0.0265 | 112 B |
| Typeless_StringKey | 193.95 ns | NA | 2.31 | 0.0513 | 216 B |
| MsgPackCliMap | 967.68 ns | NA | 11.51 | 0.1297 | 552 B |
| MsgPackCliArray | 284.20 ns | NA | 3.38 | 0.1006 | 424 B |
| ProtobufNet | 176.43 ns | NA | 2.10 | 0.0665 | 280 B |
| Hyperion | 280.14 ns | NA | 3.33 | 0.1674 | 704 B |
| ZeroFormatter | 149.95 ns | NA | 1.78 | 0.1009 | 424 B |
| JsonNetString | 1,432.55 ns | NA | 17.03 | 0.4616 | 1944 B |
| JsonNetStreamWriter | 1,775.72 ns | NA | 21.11 | 1.5526 | 6522 B |
| JilString | 547.51 ns | NA | 6.51 | 0.3481 | 1464 B |
| JilStreamWriter | 778.78 ns | NA | 9.26 | 1.4448 | 6066 B |
Of course, IntKey is the fastest, but StringKey also performs well.
LZ4 Compression
MessagePack is a fast and compact format, but it is not compression. LZ4 is an extremely fast compression algorithm; combined with MessagePack for C# it can achieve extremely fast performance and an extremely compact binary size!
MessagePack for C# has built-in LZ4 support: use LZ4MessagePackSerializer instead of MessagePackSerializer. The built-in support is special; I've created a serialize-compression pipeline and tuned it specifically so that working memory is shared and nothing is allocated or resized until serialization is finished.
The serialized binary is not simply LZ4-compressed bytes: it is valid MessagePack binary that uses the ext format with custom typecode 99.
var array = Enumerable.Range(1, 100).Select(x => new MyClass { Age = 5, FirstName = "foo", LastName = "bar" }).ToArray();

// call LZ4MessagePackSerializer instead of MessagePackSerializer, the API is completely the same
var lz4Bytes = LZ4MessagePackSerializer.Serialize(array);
var mc2 = LZ4MessagePackSerializer.Deserialize<MyClass[]>(lz4Bytes);

// you can dump the LZ4 MessagePack binary
// [[5,"foo","bar"],[5,"foo","bar"],....]
var json = LZ4MessagePackSerializer.ToJson(lz4Bytes);
Console.WriteLine(json);

// lz4Bytes is valid MessagePack, using the ext format ( [TypeCode:99, SourceLength|CompressedBinary] )
// [99,"0gAAA+vf3ABkkwWjZm9vo2JhcgoA////yVBvo2Jhcg=="]
var rawJson = MessagePackSerializer.ToJson(lz4Bytes);
Console.WriteLine(rawJson);
The built-in LZ4 support uses the primitive LZ4 API, which is more efficient when the original source length is known; therefore the size is written at the head of the binary.
Compression is not always faster: depending on the target binary, serialization may take less or more time. Even in the worst case it is only about twice as slow, and still often faster than other, uncompressed serializers.
If the target binary is under 64 bytes, LZ4MessagePackSerializer does not compress, to optimize small-size serialization.
Compare with protobuf, JSON, ZeroFormatter
protobuf-net is the major, most used binary-format library on .NET. I love protobuf-net and respect that great work, but if you use protobuf-net as a general-purpose serialization format you may encounter an annoying issue:
protobuf(-net) cannot handle null and empty collections correctly, because protobuf has no null representation (this is the protobuf-net author's answer).
The MessagePack specification can represent the C# type system completely. This is the reason to recommend MessagePack over protobuf.
Protocol Buffers has a good IDL and gRPC, which is a strong point over MessagePack. If you want to use an IDL, I recommend Google.Protobuf over MessagePack.
JSON is a good general-purpose format: simple, with a sufficient spec. Utf8Json, which I also created, adopts the same architecture as MessagePack for C# and avoids encoding/decoding cost, so it works like a binary format. If you want to know more about binary vs. text, see the Utf8Json "which serializer should be used" section.
ZeroFormatter is similar to FlatBuffers but specialized for C#. It is special: deserialization is infinitely fast, but in exchange the binary size is large, and ZeroFormatter's caching algorithm requires additional memory.
Again, ZeroFormatter is special. When the situation matches ZeroFormatter, it demonstrates the power of the format, but for many common uses MessagePack for C# is the better choice.
Extensions
MessagePack for C# has extension points, and you can add serialization support for external types. The following official extension packages are available.
The MessagePack.ImmutableCollection package adds support for the System.Collections.Immutable library: ImmutableArray<>, ImmutableList<>, ImmutableDictionary<,>, ImmutableHashSet<>, ImmutableSortedDictionary<,>, ImmutableSortedSet<>, ImmutableQueue<>, ImmutableStack<>, IImmutableList<>, IImmutableDictionary<,>, IImmutableQueue<>, IImmutableSet<>, and IImmutableStack<>.
The MessagePack.ReactiveProperty package adds support for the ReactiveProperty library: ReactiveProperty<>, IReactiveProperty<>, IReadOnlyReactiveProperty<>, ReactiveCollection<>, and Unit. It is useful for saving viewmodel state.
The MessagePack.UnityShims package provides shims of Unity's standard structs (Vector2, Vector3, Vector4, Quaternion, Color, Bounds, Rect, AnimationCurve, Keyframe, Matrix4x4, Gradient, Color32, RectOffset, LayerMask, Vector2Int, Vector3Int, RangeInt, RectInt, BoundsInt) and their formatters. It enables communication between a server and a Unity client.
After installation, extension packages must be enabled by configuration. Here is a sample that enables all extensions.
// set extensions to the default resolver.
MessagePack.Resolvers.CompositeResolver.RegisterAndSetAsDefault(
    // enable extension packages first
    ImmutableCollectionResolver.Instance,
    ReactivePropertyResolver.Instance,
    MessagePack.Unity.Extension.UnityBlitResolver.Instance,
    MessagePack.Unity.UnityResolver.Instance,

    // finally use the standard (default) resolver
    StandardResolver.Instance
);
MessagePackSerializer is the entry point of MessagePack for C#, and its static methods are the main API.
| API | Description |
| --- | --- |
| DefaultResolver | The FormatterResolver used by the resolver-less overloads. If not set, StandardResolver is used. |
| SetDefaultResolver | Set the default resolver of the MessagePackSerializer APIs. |
| Serialize<T> | Convert an object to byte[] or write it to a stream. An IFormatterResolver overload uses the specified resolver. |
| SerializeUnsafe<T> | Same as Serialize<T> but returns ArraySegment<byte>. The result points into an internal buffer pool; it cannot be shared across threads or held onto, so use it quickly. |
| Deserialize<T> | Convert byte[], ArraySegment<byte>, or a stream to an object. An IFormatterResolver overload uses the specified resolver. |
| NonGeneric.* | Non-generic Serialize/Deserialize APIs that take the type as the first argument. Slightly slower than the generic API but useful for framework integration such as an ASP.NET formatter. |
| Typeless.* | Typeless Serialize/Deserialize APIs that need no type parameter, like BinaryFormatter. They produce .NET-specific binary and are a bit slower than the standard APIs. |
| ToJson | Dump MessagePack binary to a JSON string. Useful for debugging. |
| FromJson | Convert a JSON string to MessagePack binary. |
| ToLZ4Binary | (LZ4 only) Convert msgpack binary to LZ4 msgpack binary. |
| Decode | (LZ4 only) Convert LZ4 msgpack binary to standard msgpack binary. |
MessagePack for C# operates at the byte[] level, so the byte[] API is faster than the Stream API. If byte[] can be used for I/O, I recommend the byte[] API.
Deserialize<T>(Stream) has a bool readStrict overload, which reads exactly one message's worth of bytes from the stream. The default is false, which reads all remaining stream data and is faster than readStrict; if the stream contains contiguous messages, use readStrict = true, as in the sketch below.
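A small sketch of reading consecutive messages from one stream with readStrict (the file name and MyClass type are assumptions):

using (var stream = File.OpenRead("messages.bin"))
{
    // passing true (readStrict) reads exactly one message's worth of bytes,
    // leaving the stream positioned at the start of the next message
    var first = MessagePackSerializer.Deserialize<MyClass>(stream, true);
    var second = MessagePackSerializer.Deserialize<MyClass>(stream, true);
}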
The high-level API uses a memory pool internally to avoid unnecessary memory allocation. If the result size is under 64K, GC memory is allocated only for the returned bytes.
LZ4MessagePackSerializer has the same API as MessagePackSerializer and shares the DefaultResolver. It also has an additional SerializeToBlock method.
Low-Level API (IMessagePackFormatter)
IMessagePackFormatter is the per-type serializer. For example, Int32Formatter : IMessagePackFormatter<Int32> is the Int32 MessagePack serializer.
All APIs work at the byte[] level, with no Stream and no Writer/Reader, which improves performance. Many built-in formatters exist under MessagePack.Formatters. You can get a sub-type's serializer with formatterResolver.GetFormatter<T>. Here is a sample of writing your own formatter.
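A sketch of what such a formatter might look like, assuming the byte[]-and-offset IMessagePackFormatter<T> signature described here; this FileInfoFormatter also matches the one referenced in the resolver sample further below:

using System.IO;
using MessagePack;
using MessagePack.Formatters;

public class FileInfoFormatter : IMessagePackFormatter<FileInfo>
{
    public int Serialize(ref byte[] bytes, int offset, FileInfo value, IFormatterResolver formatterResolver)
    {
        if (value == null)
        {
            return MessagePackBinary.WriteNil(ref bytes, offset);
        }

        // store only the full path as a msgpack string
        return MessagePackBinary.WriteString(ref bytes, offset, value.FullName);
    }

    public FileInfo Deserialize(byte[] bytes, int offset, IFormatterResolver formatterResolver, out int readSize)
    {
        if (MessagePackBinary.IsNil(bytes, offset))
        {
            readSize = 1;
            return null;
        }

        var path = MessagePackBinary.ReadString(bytes, offset, out readSize);
        return new FileInfo(path);
    }
}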
MessagePackBinary is the lowest-level API, like the Reader/Writer of other serializers. MessagePackBinary is a static class in order to avoid Reader/Writer allocations.
| Method | Description |
| --- | --- |
|  | Skip a MessagePack binary block including sub-structures (array/map); returns the read size. Useful when creating a deserializer. |
| ReadMessageBlockFromStreamUnsafe | Read a binary block from a Stream; if readOnlySingleMessage = false, also reads sub-structures (array/map). |
| ReadStringSegment | Read the string format without decoding UTF8; returns ArraySegment<byte>. |
| ReadBytesSegment | Read the binary format without copying bytes; returns ArraySegment<byte>. |
| Write/ReadMapHeader | Write/Read the map format header (element length). |
| WriteMapHeaderForceMap32Block | Write the map format header, always using the map32 format (fixed length, 5). |
| Write/ReadArrayHeader | Write/Read the array format header (element length). |
| WriteArrayHeaderForceArray32Block | Write the array format header, always using the array32 format (fixed length, 5). |
| Write/Read*** | *** is a primitive type name (Int32, Single, String, etc.). |
| Write***Force***Block | *** is a primitive integer name (Byte, Int32, UInt64, etc.); reserves the strict block and writes that format code. |
| Write/ReadBytes | Write/Read byte[] using the bin format. |
| Write/ReadExtensionFormat | Write/Read the ext format header (Length + TypeCode) and content byte[]. |
| Write/ReadExtensionFormatHeader | Write/Read the ext format header (Length + TypeCode) only. |
| WriteExtensionFormatHeaderForceExt32Block | Write the ext format header, always using the ext32 format (fixed length, 6). |
| WriteRaw | Write a msgpack block directly. |
| IsNil | Is the TypeCode Nil? |
| GetMessagePackType | Return the MessagePackType at the target position in the MessagePack binary. |
| GetExtensionFormatHeaderLength | Calculate the extension format header length. |
| GetEncodedStringBytes | Get the msgpack-packed raw binary of a string. |
| EnsureCapacity | Resize the byte[] if it cannot hold the data. |
| FastResize | Buffer.BlockCopy version of Array.Resize. |
| FastCloneWithResize | Same as FastResize but returns a copied byte[]. |
The Read APIs return the deserialized primitive and the read size. The Write APIs return the write size and automatically ensure the capacity of the ref byte[]. The Write/Read APIs have byte[] and Stream overloads; the byte[] APIs are generally faster.
DateTime is serialized in the MessagePack Timestamp format; it serializes/deserializes as UTC and loses the Kind info. If you use NativeDateTimeResolver, DateTime is serialized in the native .NET binary format, which keeps the Kind info but cannot be exchanged with other platforms.
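A small sketch contrasting the two behaviors, using the resolver overload described in the API table (the sample values are illustrative):

var local = new DateTime(2018, 1, 1, 9, 0, 0, DateTimeKind.Local);

// default: MessagePack Timestamp format, round-trips as UTC and drops Kind
var tsBin = MessagePackSerializer.Serialize(local);
var fromTimestamp = MessagePackSerializer.Deserialize<DateTime>(tsBin); // Kind is Utc

// NativeDateTimeResolver: .NET-native binary layout, keeps Kind but is not portable
var nativeBin = MessagePackSerializer.Serialize(local, MessagePack.Resolvers.NativeDateTimeResolver.Instance);
var fromNative = MessagePackSerializer.Deserialize<DateTime>(nativeBin, MessagePack.Resolvers.NativeDateTimeResolver.Instance); // Kind is Local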
MessagePackCode represents the msgpack format of the first byte. Its static class has the ToMessagePackType and ToFormatName utility methods.
MessagePackRange represents the min-max fix ranges of the msgpack format.
Extension Point (IFormatterResolver)
IFormatterResolver is the storage of typed serializers. The serializer API accepts a resolver, which lets you customize serialization.
| Resolver Name | Description |
| --- | --- |
| BuiltinResolver | Built-in primitives and standard classes resolver. It includes primitives (int, bool, string...) and their nullables, arrays, and lists, plus some extra built-in types (Guid, Uri, BigInteger, etc.). |
| StandardResolver | Composited resolver. It resolves in the following order: builtin -> attribute -> dynamic enum -> dynamic generic -> dynamic union -> dynamic object -> dynamic object fallback. This is the default of MessagePackSerializer. |
| StandardResolverAllowPrivate | Same as StandardResolver but allows serializing/deserializing private members. |
| ContractlessStandardResolverAllowPrivate | Same as ContractlessStandardResolver but allows serializing/deserializing private members. |
| PrimitiveObjectResolver | MessagePack primitive object resolver. It is used as the fallback for the object type and supports bool, char, sbyte, byte, short, int, long, ushort, uint, ulong, float, double, DateTime, string, byte[], ICollection, IDictionary. |
| DynamicObjectTypeFallbackResolver | Serialization uses the runtime type of values typed as object; deserialization uses PrimitiveObjectResolver. |
| AttributeFormatterResolver | Gets the formatter from the [MessagePackFormatter] attribute. |
| CompositeResolver | Singleton helper for setting up custom resolvers. You can use the Register or RegisterAndSetAsDefault APIs. |
| NativeDateTimeResolver | Serializes DateTime using the .NET native binary format. |
| UnsafeBinaryResolver | Serializes Guid and Decimal using their binary representation. It is faster than the standard (string) representation. |
| OldSpecResolver | str and bin serialize/deserialize following the old MessagePack spec (raw format). |
| DynamicEnumResolver | Resolver for enums and their nullables; serializes the underlying type. It uses dynamic code generation to avoid boxing and boost performance. |
| DynamicEnumAsStringResolver | Resolver for enums and their nullables; serializes the name. It uses a reflection call to resolve nullables the first time. |
| DynamicGenericResolver | Resolver for generic types (Tuple<>, List<>, Dictionary<,>, Array, etc.). It uses a reflection call to resolve generic arguments the first time. |
| DynamicUnionResolver | Resolver for interfaces marked with UnionAttribute. It uses dynamic code generation to create a dynamic formatter. |
| DynamicObjectResolver | Resolver for classes and structs marked with MessagePackObjectAttribute. It uses dynamic code generation to create a dynamic formatter. |
| DynamicContractlessObjectResolver | Resolver for all classes and structs. It does not need MessagePackObjectAttribute and serializes keys as strings (same as marking [MessagePackObject(true)]). |
| DynamicObjectResolverAllowPrivate | Same as DynamicObjectResolver but allows serializing/deserializing private members. |
| DynamicContractlessObjectResolverAllowPrivate | Same as DynamicContractlessObjectResolver but allows serializing/deserializing private members. |
| TypelessObjectResolver | Used for object; embeds the .NET type in the binary via the ext (100) format, so no type needs to be passed at deserialization. |
| TypelessContractlessStandardResolver | Composited resolver. It resolves in the following order: nativedatetime -> builtin -> attribute -> dynamic enum -> dynamic generic -> dynamic union -> dynamic object -> dynamic contractless -> typeless. This is the default of MessagePackSerializer.Typeless. |
This is the only configuration point for assembling resolver priority. In most cases it is sufficient to have one custom resolver globally, and CompositeResolver is the helper for that.
// use the global-singleton CompositeResolver.
// This method initializes CompositeResolver and sets it as the default for MessagePackSerializer.
CompositeResolver.RegisterAndSetAsDefault(
    // resolve custom types first
    ImmutableCollectionResolver.Instance,
    ReactivePropertyResolver.Instance,
    MessagePack.Unity.Extension.UnityBlitResolver.Instance,
    MessagePack.Unity.UnityResolver.Instance,

    // finally use the standard resolver
    StandardResolver.Instance);
Here is a sample that uses DynamicEnumAsStringResolver with DynamicContractlessObjectResolver (a JSON.NET-like lightweight setting).
// composite, same as StandardResolver
CompositeResolver.RegisterAndSetAsDefault(
    MessagePack.Resolvers.BuiltinResolver.Instance,
    MessagePack.Resolvers.AttributeFormatterResolver.Instance,

    // replace enum resolver
    MessagePack.Resolvers.DynamicEnumAsStringResolver.Instance,

    MessagePack.Resolvers.DynamicGenericResolver.Instance,
    MessagePack.Resolvers.DynamicUnionResolver.Instance,
    MessagePack.Resolvers.DynamicObjectResolver.Instance,

    MessagePack.Resolvers.PrimitiveObjectResolver.Instance,

    // final fallback (last priority)
    MessagePack.Resolvers.DynamicContractlessObjectResolver.Instance
);
If you want to write a custom composite resolver, you can write it like the following.
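A hedged sketch of such a custom composite resolver (the class name and the particular set of inner resolvers are illustrative):

using MessagePack;
using MessagePack.Formatters;
using MessagePack.Resolvers;

public class CustomCompositeResolver : IFormatterResolver
{
    public static readonly IFormatterResolver Instance = new CustomCompositeResolver();

    // resolvers are probed in order; the first one that returns a formatter wins
    static readonly IFormatterResolver[] resolvers = new IFormatterResolver[]
    {
        BuiltinResolver.Instance,
        AttributeFormatterResolver.Instance,
        DynamicEnumAsStringResolver.Instance,
        DynamicContractlessObjectResolver.Instance
    };

    CustomCompositeResolver() { }

    public IMessagePackFormatter<T> GetFormatter<T>()
    {
        return FormatterCache<T>.formatter;
    }

    static class FormatterCache<T>
    {
        public static readonly IMessagePackFormatter<T> formatter;

        static FormatterCache()
        {
            foreach (var resolver in resolvers)
            {
                var f = resolver.GetFormatter<T>();
                if (f != null)
                {
                    formatter = f;
                    return;
                }
            }
        }
    }
}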
If you want to make your own extension package, you need to write a formatter and a resolver. IMessagePackFormatter receives an IFormatterResolver on every serialize/deserialize request, and you can get a child type's serializer with resolver.GetFormatterWithVerify<T>.
Here is a sample of your own resolver.
public class SampleCustomResolver : IFormatterResolver
{
    // Resolver should be a singleton.
    public static IFormatterResolver Instance = new SampleCustomResolver();

    SampleCustomResolver()
    {
    }

    // GetFormatter<T>'s cost should be minimized, so use a type cache.
    public IMessagePackFormatter<T> GetFormatter<T>()
    {
        return FormatterCache<T>.formatter;
    }

    static class FormatterCache<T>
    {
        public static readonly IMessagePackFormatter<T> formatter;

        // a generic type's static constructor should be minimized to reduce type generation size!
        // use an outer helper method.
        static FormatterCache()
        {
            formatter = (IMessagePackFormatter<T>)SampleCustomResolverGetFormatterHelper.GetFormatter(typeof(T));
        }
    }
}

internal static class SampleCustomResolverGetFormatterHelper
{
    // If the type is a concrete type, use the type-formatter map
    static readonly Dictionary<Type, object> formatterMap = new Dictionary<Type, object>()
    {
        {typeof(FileInfo), new FileInfoFormatter()}
        // add more of your own custom serializers.
    };

    internal static object GetFormatter(Type t)
    {
        object formatter;
        if (formatterMap.TryGetValue(t, out formatter))
        {
            return formatter;
        }

        // If the target type is generic, use MakeGenericType.
        if (t.IsGenericType && t.GetGenericTypeDefinition() == typeof(ValueTuple<,>))
        {
            return Activator.CreateInstance(typeof(ValueTupleFormatter<,>).MakeGenericType(t.GenericTypeArguments));
        }

        // If no formatter can be found, return null for the fallback mechanism.
        return null;
    }
}
MessagePackFormatterAttribute
MessagePackFormatterAttribute is a lightweight extension point for classes, structs, interfaces, enums, and properties/fields, similar to JSON.NET's JsonConverterAttribute. For example, it can be used to serialize a private field or to apply an x10 formatter.
The formatter is retrieved by AttributeFormatterResolver, which is included in StandardResolver.
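For instance, a hedged sketch of attaching a custom formatter to a type; CustomObject and CustomObjectFormatter are hypothetical, with the formatter implementing IMessagePackFormatter<CustomObject>:

[MessagePackFormatter(typeof(CustomObjectFormatter))]
public class CustomObject
{
    // a private field that the custom formatter chooses to serialize
    string internalId;

    public CustomObject()
    {
        this.internalId = Guid.NewGuid().ToString();
    }
}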
IgnoreFormatter
IgnoreFormatter<T> is a lightweight extension point for classes and structs. If an external type contains a member type that cannot be serialized, you can register an IgnoreFormatter<T> for it, which serializes the value as nil.
// CompositeResolver can register custom formatters.
MessagePack.Resolvers.CompositeResolver.RegisterAndSetAsDefault(
    new IMessagePackFormatter[]
    {
        // for example, register reflection infos (cannot be serialized by default)
        new IgnoreFormatter<MethodBase>(),
        new IgnoreFormatter<MethodInfo>(),
        new IgnoreFormatter<PropertyInfo>(),
        new IgnoreFormatter<FieldInfo>()
    },
    new IFormatterResolver[]
    {
        ContractlessStandardResolver.Instance
    });
Reserved Extension Types
MessagePack for C# already uses some MessagePack ext type codes; be careful not to reuse the same codes.
| Code | Type | Use by |
| --- | --- | --- |
| -1 | DateTime | msgpack-spec reserved for timestamp |
| 30 | Vector2[] | for Unity, UnsafeBlitFormatter |
| 31 | Vector3[] | for Unity, UnsafeBlitFormatter |
| 32 | Vector4[] | for Unity, UnsafeBlitFormatter |
| 33 | Quaternion[] | for Unity, UnsafeBlitFormatter |
| 34 | Color[] | for Unity, UnsafeBlitFormatter |
| 35 | Bounds[] | for Unity, UnsafeBlitFormatter |
| 36 | Rect[] | for Unity, UnsafeBlitFormatter |
| 37 | Int[] | for Unity, UnsafeBlitFormatter |
| 38 | Float[] | for Unity, UnsafeBlitFormatter |
| 39 | Double[] | for Unity, UnsafeBlitFormatter |
| 99 | All | LZ4MessagePackSerializer |
| 100 | object | TypelessFormatter |
for Unity
You can install it as a package that includes the source code. If the build target is PC, you can use it as-is, but if the build target uses IL2CPP, you cannot use Dynamic***Resolver, so use pre-code generation. Please see the pre-code generation section.
In Unity, MessagePackSerializer can serialize Vector2, Vector3, Vector4, Quaternion, Color, Bounds, Rect, AnimationCurve, Keyframe, Matrix4x4, Gradient, Color32, RectOffset, LayerMask, Vector2Int, Vector3Int, RangeInt, RectInt, and BoundsInt, plus their nullables, arrays, and lists, via the built-in extension UnityResolver. It is included in StandardResolver by default.
MessagePack for C# also has an additional unsafe extension. UnsafeBlitResolver is a special resolver for extremely fast unsafe serialization/deserialization of struct arrays.
Vector3[] serialization is about 20x faster than native JsonUtility. With UnsafeBlitResolver, Vector2[], Vector3[], Quaternion[], Color[], Bounds[], and Rect[] are serialized in a special format (ext typecodes 30~39). With UnityBlitWithPrimitiveArrayResolver, int[], float[], and double[] are supported too. This special feature is useful for serializing a Mesh (many Vector3[]) or many transform positions.
If you want to use the unsafe resolvers, you must enable the unsafe compiler option and define an additional symbol: first write -unsafe in smcs.rsp, gmcs.rsp, etc., then define the ENABLE_UNSAFE_MSGPACK symbol.
Here is a sample configuration.
Resolvers.CompositeResolver.RegisterAndSetAsDefault(
    MessagePack.Unity.UnityResolver.Instance,
    MessagePack.Unity.Extension.UnityBlitWithPrimitiveArrayResolver.Instance

    // If PC, use StandardResolver
    // , MessagePack.Resolvers.StandardResolver.Instance

    // If IL2CPP, Builtin + GeneratedResolver.
    // , MessagePack.Resolvers.BuiltinResolver.Instance
);
The MessagePack.UnityShims NuGet package provides .NET server-side serialization support for communicating with Unity. It includes shims of Vector3 etc. and the safe/unsafe serialization extensions.
If you want to share classes between Unity and a server, you can use a SharedProject, Reference as Link, the new MSBuild (VS2017) wildcard references, etc. In any case you need source-code-level sharing. This is a sample project structure using a SharedProject.
SharedProject (source code sharing)
  - shared source code for server and client
ServerProject (.NET 4.6/.NET Core/.NET Standard)
  - [SharedProject]
  - [MessagePack]
  - [MessagePack.UnityShims]
ClientDllProject (.NET 3.5)
  - [SharedProject]
  - [MessagePack] (not the dll; use the MessagePack.unitypackage source code)
Unity
  - [Built ClientDll]
Alternatively, plain POCOs with DataContract/DataMember can be used.
Pre Code Generation (Unity/Xamarin Support)
MessagePack for C# generates object formatters dynamically with ILGenerator. This is fast and happens transparently at run time, but it incurs a first-time generation cost and does not work in AOT environments (Xamarin, Unity IL2CPP, etc.).
Note: if Unity's build target is PC, code generation is not needed; dynamic generation works well.
If you want to avoid the generation cost, or to run on Xamarin or Unity, you need pre-code generation. mpc.exe (MessagePackCompiler) is the code generator of MessagePack for C#. mpc can be downloaded from the releases page as mpc.zip. mpc uses Roslyn to analyze source code.
mpc arguments help:
-i, --input              [required] Input path of the csproj to analyze
-o, --output             [required] Output file path
-c, --conditionalsymbol  [optional, default=empty] Conditional compiler symbols
-r, --resolvername       [optional, default=GeneratedResolver] Set the resolver name
-n, --namespace          [optional, default=MessagePack] Set the root namespace
-m, --usemapmode         [optional, default=false] Force map-mode serialization
// Simple Sample:
mpc.exe -i "..\src\Sandbox.Shared.csproj" -o "MessagePackGenerated.cs"
// Use force map simulate DynamicContractlessObjectResolver
mpc.exe -i "..\src\Sandbox.Shared.csproj" -o "MessagePackGenerated.cs" -m
If you create a DLL from an MSBuild project, you can use a Pre/Post build event.
<PropertyGroup>
    <PreBuildEvent>
        mpc.exe (useful when the analyze/generate target is this project itself)
    </PreBuildEvent>
    <PostBuildEvent>
        mpc.exe (useful when the analyze target is another project)
    </PostBuildEvent>
</PropertyGroup>
By default, mpc.exe generates the resolver as MessagePack.Resolvers.GeneratedResolver and the formatters under MessagePack.Formatters.***. At application launch you need to set the resolver first.
// CompositeResolver is a singleton helper for using custom resolvers.
// Of course you can also write your own custom resolver.
MessagePack.Resolvers.CompositeResolver.RegisterAndSetAsDefault(
    // use the generated resolver first, and combine with other generated/custom resolvers
    MessagePack.Resolvers.GeneratedResolver.Instance,

    // finally, use the builtin/primitive resolvers (don't use StandardResolver, it includes dynamic generation)
    MessagePack.Resolvers.BuiltinResolver.Instance,
    MessagePack.Resolvers.AttributeFormatterResolver.Instance,
    MessagePack.Resolvers.PrimitiveObjectResolver.Instance
);
Note: mpc.exe basically runs only on Windows, but you can run it on Mono, which supports Mac and Linux.
RPC
MessagePack advocated MessagePack-RPC, but its standardization has stalled and it is not widely used. I've created a gRPC-based MessagePack HTTP/2 RPC streaming framework called MagicOnion. gRPC usually communicates with Protocol Buffers using an IDL, but MagicOnion uses MessagePack for C# and does not need an IDL. For C#-to-C# communication, schemaless (C# classes as the schema) is better than an IDL.
How to Build
Open MessagePack.sln in Visual Studio 2017.
The Unity project uses symbolic links. First, run make_unity_symlink.bat so the sources are linked under the Unity project; you can then open src\MessagePack.UnityClient in the Unity Editor.
Author Info
Yoshifumi Kawai(a.k.a. neuecc) is a software developer in Japan.
He is the Director/CTO at Grani, Inc.
Grani is a mobile game developer company in Japan and well known for using C#.
He has been awarded the Microsoft MVP for Visual C# every year since 2011.
He is known as the creator of UniRx (Reactive Extensions for Unity).
MessagePack.FSharpExtensions is a MessagePack-CSharp extension library for F#.
Usage
open MessagePack
open MessagePack.Resolvers
open MessagePack.FSharp
CompositeResolver.RegisterAndSetAsDefault(
FSharpResolver.Instance,
StandardResolver.Instance
)
[<MessagePackObject>]
type UnionSample =
  | Foo of XYZ : int
  | Bar of OPQ : string list

let data = Foo 999
let bin = MessagePackSerializer.Serialize(data)

match MessagePackSerializer.Deserialize<UnionSample>(bin) with
| Foo x ->
    printfn "%d" x
| Bar xs ->
    printfn "%A" xs
This is a low-level @nogc, nothrow, @safe, pure and betterC compatible
MessagePack serializer and deserializer. The
library was designed to avoid any external dependencies and handle the low-level protocol
details only. As a result the library doesn't have to do any error handling or
buffer management. This library never dynamically allocates memory.
import msgpack_ll;

// Buffer allocation is not handled by the library
ubyte[128] buffer;

// The MsgpackType enum contains all low-level MessagePack types
enum type = MsgpackType.uint8;

// The DataSize!(MsgpackType) function returns the size of serialized data
// for a certain type.
// The formatter and parser use ref ubyte[DataSize!type] types. This
// forces the compiler to do array length checks at compile time and avoid
// any runtime bounds checking.

// Format the number 42 as a uint8 type. This will require
// DataSize!(MsgpackType.uint8) == 2 bytes storage.
formatType!(type)(42, buffer[0..DataSize!type]);

// To deserialize we have to somehow get the data type at runtime,
// then verify the type is as expected.
assert(getType(buffer[0]) == type);

// Now deserialize. Here we have to specify the MsgpackType
// as a compile time value.
const result = parseType!type(buffer[0..DataSize!type]);
assert(result == 42);
A quick look at the generated code for this library
Serializing an 8 bit integer
void format(ref ubyte[128] buffer)
{
    enum type = MsgpackType.uint8;
    formatType!(type)(42, buffer[0..DataSize!type]);
}
Because of clever typing there's no runtime bounds checking but all bounds
checks are performed at compile time by type checking.
Serializing a small negative integer into one byte
void format(ref ubyte[128] buffer)
{
    enum type = MsgpackType.negFixInt;
    formatType!(type)(-11, buffer[0..DataSize!type]);
}
The MessagePack format is cleverly designed, so encoding the type is actually free
in this case.
pure nothrow @nogc @safe void msgpack_ll.format(ref ubyte[128]):
        mov     BYTE PTR [rdi], -11
        ret
Deserializing an expected type
bool parse(ref ubyte[128] buffer, ref byte value)
{
    enum type = MsgpackType.negFixInt;
    auto rtType = getType(buffer[0]);
    if (rtType != type)
        return false;

    value = parseType!type(buffer[0..DataSize!type]);
    return true;
}
The compiler will inline functions and can see through the switch block in
getType. If you explicitly ask for one type, the compiler will reduce the
code to a simple explicit if check for this type!
bool parse(ref ubyte[128] buffer, ref byte value)
{
    auto rtType = getType(buffer[0]);
    switch (rtType)
    {
        case MsgpackType.negFixInt:
            value = parseType!(MsgpackType.negFixInt)(buffer[0..DataSize!(MsgpackType.negFixInt)]);
            return true;
        case MsgpackType.int8:
            value = parseType!(MsgpackType.int8)(buffer[0..DataSize!(MsgpackType.int8)]);
            return true;
        default:
            return false;
    }
}
The generated code is obviously slightly more complex. The interesting part here
is that type checking is directly done using the raw type value and not the
enum values returned by getType. Even manually written ASM probably can't do
much better here.
Automatic Message Pack detection (from the HTTP headers) and encoding of all JSON messages to Message Pack.
Extension of the current ExpressJS API; Introducing the Response.msgPack(jsObject) method on the standard ExpressJS Response object.
Getting Started
With auto-detection and transformation enabled, the middleware automatically detects the HTTP header Accept: application/x-msgpack and piggybacks on the Response.json() method of the ExpressJS API to encode the JSON response as MessagePack. This approach is useful when you have existing applications that need to use the middleware without changing the codebase very much.
Note: remember to add the header Accept: application/x-msgpack to the request.
Auto-detection and transformation can also be disabled. In that mode the middleware extends the Response object of the ExpressJS framework by adding the msgPack() method to it; to return an encoded response, you just call the Response.msgPack() method, which accepts a JavaScript object as its parameter.
Contributions are welcome 🤘 We encourage developers like you to help us improve the projects we've shared with the community. Please see the Contributing Guide and the Code of Conduct.
This is a high speed msgpack encoder and decoder
for R, based on the CWPack C
implementation.
msgpack is a binary data format with data structures similar to JSON
and a compact binary encoding. It can be a drop-in replacement for
JSON in most applications. It is designed to be fast to parse and
compact to transmit and store.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
This is a Racket implementation of MessagePack, a binary data serialisation
format. It allows you to serialise (pack) and de-serialise (unpack) Racket
objects to and from binary data.
Installation
The easiest way to install this library is from the Racket Package Catalog.
Run the following code from your shell:
raco pkg install msgpack
If you wish to install the package from this repository use the included
makefile:
make install # Install the package
make remove # Uninstall the package
Using MessagePack
;;; Import the library first
(require msgpack)

;;; Some object to pack
(define hodgepodge (vector 1 2 (void) '#(3 #t) "foo"))

;;; Packing data
(define packed (call-with-output-bytes (λ (out) (pack hodgepodge out))))
;;; > #"\225\1\2\300\222\3\303\243foo"

;;; Unpacking data
(define unpacked (call-with-input-bytes packed (λ (in) (unpack in))))
;;; > '#(1 2 #<void> #(3 #t) "foo")
The pack function takes a Racket object and a binary output port as arguments
and writes the serialised data to the port. The unpack function takes a
binary input port and returns one de-serialised object, consuming the necessary
amount of bytes from the port in the process. For more details please refer to
the documentation.
In the above example code we set the output and input ports to be byte strings
so we could work with the packed and unpacked data directly inside the Racket
instance.
Status
The library is fully functional, covered by test cases, and the API should be
reasonably mature, but I am not yet willing to completely rule out changes. See
also below for parts of the library that could not be tested at the moment due
to technical reasons.
Caveats
The following cases cannot be tested for the time being:
The bin32 type: storing a byte string that is 2^32 bytes long requires 4GiB, and my machine simply runs out of memory.
The same goes for the str32 type
The same goes for the array32 type
The same goes for the map32 type
The same goes for the ext32 type
Strings are only tested using ASCII characters; if anyone can generate UTF-8 strings with a given length in bytes, please help out.
License
Released under the GPLv3+ license, see the COPYING file for details.
Convert to and from msgpack objects in R using the official msgpack-c API through Rcpp.
A flowchart describing the conversion of R objects into msgpack objects and back.
Msgpack EXT types are converted to raw vectors with EXT attributes containing the extension type. The extension type must be an integer from 0 to 127.
Maps are converted to data.frames with additional class "map". Map objects in R contain key and value list columns and can be simplified to named lists or named vectors. The helper function msgpack_map creates map objects that can be serialized into msgpack.
An Objective-C wrapper for msgpack-c. Focuses on ease of use and speed.
If you need configurability, there are other, more advanced libraries, for example MPMessagePack.
This library will always try to use sane defaults. If any nil value is encountered in the MessagePack-data, the object will
be omitted instead of returning an [NSNull null]. This means that there can be no nil objects in dictionaries, and object-less
keys will be lost in translation.
The library supports MessagePack timestamps,
and will return an NSDate-object whenever one is encountered. When serializing, any NSDate-objects will also be
serialized as native MessagePack timestamps.
You can add native serialization for your own classes by conforming to the MessagePackSerializable protocol and registering it like this: