
Commit 1e55e25: Update documentation
mattiasflodin committed Apr 2, 2020
1 parent 2969862
Showing 12 changed files with 352 additions and 209 deletions.
76 changes: 45 additions & 31 deletions README.md

How it works
============
By low latency I mean that the time from calling the library and
returning to the caller is as short as I could make it. The code
generated at the call site consists only of pushing the arguments on a
shared, lockless queue. In the non-contended case this has roughly the
same cost as making a function call. The actual message formatting and
writing is performed asynchronously by a separate thread (a simplified
sketch of this design follows the list below). This removes or hides
several costs:

* No transition to the kernel at the call site. The kernel is an easily
overlooked but important cost, not only because the transition costs
time in itself, but also because it disturbs the CPU caches and thereby
slows down the surrounding application code.
* No locks need to be taken for synchronization between threads (unless
the queue fills up; see the performance article for more information
about the implications of this).
* No text formatting needs to be performed before resuming the main
task of the program.
* It doesn't have to wait for the actual I/O operation to complete.
* If there are bursts of log calls, multiple items on the queue can be
batched into a single I/O operation, improving throughput without sacrificing
latency.
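
To make the design more concrete, here is a deliberately simplified
sketch of the deferred-formatting idea. It is not the reckless
implementation: reckless uses a shared lockless queue, whereas this
sketch protects an ordinary queue with a mutex for brevity. The point it
illustrates is that the call site only copies the arguments and returns,
while formatting and I/O happen on a background thread that drains the
queue in batches.

```c++
#include <condition_variable>
#include <cstdio>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>

// Toy asynchronous logger: write() only enqueues the arguments; a
// background thread performs the formatting and the actual output.
class toy_async_log {
public:
    toy_async_log() : done_(false), writer_([this] { run(); }) {}
    ~toy_async_log()
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            done_ = true;
        }
        cv_.notify_one();
        writer_.join();
    }

    // The "call site": capture the arguments by value and return.
    template <class... Args>
    void write(char const* fmt, Args... args)
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.emplace_back([=] { std::printf(fmt, args...); });
        }
        cv_.notify_one();
    }

private:
    void run()
    {
        std::unique_lock<std::mutex> lock(mutex_);
        while (!done_ || !queue_.empty()) {
            cv_.wait(lock, [this] { return done_ || !queue_.empty(); });
            // Drain everything that has accumulated, so bursts of log
            // calls are handled as one batch of formatting and I/O.
            std::deque<std::function<void()>> batch;
            batch.swap(queue_);
            lock.unlock();
            for (auto& entry : batch)
                entry();   // format and write, off the caller's thread
            lock.lock();
        }
    }

    std::mutex mutex_;
    std::condition_variable cv_;
    std::deque<std::function<void()>> queue_;
    bool done_;
    std::thread writer_;
};

int main()
{
    toy_async_log log;
    for (int i = 0; i != 4; ++i)
        log.write("Warning: %d\n", i);   // returns before anything is printed
    // The destructor joins the writer thread, flushing remaining entries.
}
```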

Because the formatting and writing happen asynchronously in a separate
thread, there are a few caveats you need to be aware of:
* If the application crashes, any log entries that are still on the queue
will be lost. This is not unique to asynchronous logging—for example
fprintf will buffer data until you flush it—but asynchronous logging
arguably makes the issue worse. The library provides convenience
functions to aid with this.
* As all string formatting is done in a single thread, it could
theoretically limit the scalability of your application if
formatting is expensive or your program generates a high volume of
log entries in parallel.
* Performance becomes somewhat less predictable and harder to measure. Rather
than putting the cost of the logging on the thread that calls the logging
library, the OS may suspend some other thread to make room for the logging
thread.
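
To show basic use, here is a minimal sketch of a program that would
produce output of the kind shown below. It assumes the library's stock
`reckless::severity_log` policy class and `reckless::file_writer`; the
exact field configuration is illustrative rather than a verbatim copy of
the bundled example.

```c++
// Minimal sketch, not the exact example shipped with the library.
#include <reckless/severity_log.hpp>
#include <reckless/file_writer.hpp>

#include <string>

// A policy-based severity log that prefixes each line with a severity
// marker (D/I/W/E) and a timestamp, separated by spaces.
using log_t = reckless::severity_log<
    reckless::indent<4>,        // indentation for multi-line entries
    ' ',                        // field separator
    reckless::severity_field,   // severity marker first
    reckless::timestamp_field   // then date and time
    >;

reckless::file_writer writer("log.txt");
log_t g_log(&writer);

int main()
{
    std::string s("Hello World!");

    g_log.debug("Pointer: %p", (void*) &s);
    g_log.info("Info line: %s", s);

    for(int i = 0; i != 4; ++i)
        g_log.warn("Warning: %d", i);

    g_log.error("Error: %f", 3.14);
    return 0;
}
```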
This would give the following output:
```
D 2019-03-09 12:53:54.280 Pointer: 0x7fff58378850
I 2019-03-09 12:53:54.280 Info line: Hello World!
W 2019-03-09 12:53:54.280 Warning: 0
W 2019-03-09 12:53:54.280 Warning: 1
W 2019-03-09 12:53:54.280 Warning: 2
W 2019-03-09 12:53:54.280 Warning: 3
E 2019-03-09 12:53:54.280 Error: 3.140000
```
Platforms
=========
The library works on Windows and Linux. BSD is on the roadmap. I don't
own any Apple computers, so OS X won't happen unless someone sends me
a patch or buys me hardware.

Building
========
Alternative 1: using Visual Studio
----------------------------------
On Windows it is recommended to use Visual Studio for building the library.
Simply open reckless.sln, choose "batch build" and select all configurations.
The library files will be placed in the `build` subdirectory.
To build a program against the library you need to set your library path to
point to the appropriate library build for your configuration, and set the
include path to `$(RECKLESS)/reckless/include`, where `RECKLESS` is a
property that points to the reckless source directory.
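
As a rough command-line equivalent of those project settings, something
like the following could be used from a developer command prompt. The
`build` directory and the library file name `reckless.lib` are
assumptions here; adjust them to match your configuration.

```bat
:: Hypothetical example; paths and the library file name depend on your
:: chosen configuration.
set RECKLESS=C:\path\to\reckless
cl /EHsc myprogram.cpp /I %RECKLESS%\reckless\include ^
    /link /LIBPATH:%RECKLESS%\build reckless.lib
```
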
Alternative 2: using Make
-------------------------
To build the library using GNU Make, clone the git repository and run make.
This only works with GCC-compatible compilers.
To build a program against the library, given the variable RECKLESS
pointing to the reckless root directory, use e.g.:
```bash
g++ -std=c++11 myprogram.cpp -I$(RECKLESS)/reckless/include -L$(RECKLESS)/reckless/lib -lreckless -lpthread
```

Alternative 3: using CMake
--------------------------
To build the library using CMake, clone the git repository and run the following commands:

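The typical out-of-source invocation, shown here as a generic sketch
(any reckless-specific CMake options are assumptions not covered), is:

```bash
# Generic out-of-source CMake build; project-specific options, if any,
# are not shown.
mkdir build && cd build
cmake ..
cmake --build .
```
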
For more details, see the [manual](doc/manual.md).

Alternatives
============
Other logging libraries with a similar, asynchronous design are

* [spdlog](https://github.com/gabime/spdlog/)
* [g3log](https://github.com/KjellKod/g3log/)
* [NanoLog](https://github.com/Iyengar111/NanoLog) (there is [another
NanoLog](https://github.com/PlatformLab/NanoLog) which deviates in design
since it logs binary data and requires postprocessing to read the log file)
* [mini-async-log](https://github.com/RafaGago/mini-async-log)
Binary file modified doc/images/performance_call_burst_1.png
Binary file modified doc/images/performance_call_burst_1_ma.png
Binary file modified doc/images/performance_mandelbrot.png
Binary file modified doc/images/performance_mandelbrot_difference.png
Binary file modified doc/images/performance_periodic_calls_all.png
Binary file modified doc/images/performance_write_files.png
