Bongani Ngema


Adrenaline: An AI-powered Debugger


Adrenaline is a debugger powered by the OpenAI Codex. It not only fixes your code, but teaches you along the way.



Adrenaline can be used here. Simply plug in your broken code and an error message (e.g. a stack trace or natural language description of the error) and click "Debug."

Note that you will have to supply your own OpenAI API key. This is to prevent API misuse.

Running Locally

To run locally, clone the repository and run the following:

$ npm install
$ npm run start-local



How It Works

Adrenaline sends your code and error message to the OpenAI Edit & Insert API (code-davinci-edit-001), which returns code changes that might fix your error (or at least give you a starting point). The proposed fixes are displayed in-line like a diff, with the option to accept, reject, or modify each code change.
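The in-line diff display described above can be sketched in a few lines. The snippet below is a toy sketch, not Adrenaline's actual code, and `render_fix` is a hypothetical helper name; it shows how a proposed fix can be rendered as a unified diff for accept/reject review.

```python
import difflib

def render_fix(original: str, proposed: str) -> str:
    """Render a proposed code change as a unified diff, the way a
    suggested fix can be shown for accept/reject review.
    (Toy sketch, not Adrenaline's actual implementation.)"""
    diff = difflib.unified_diff(
        original.splitlines(keepends=True),
        proposed.splitlines(keepends=True),
        fromfile="original",
        tofile="proposed",
    )
    return "".join(diff)

broken = "def add(a, b):\n    return a - b\n"
fixed = "def add(a, b):\n    return a + b\n"
print(render_fix(broken, fixed))
```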

Error Explanation

Not only does Adrenaline propose fixes for your errors, but it also explains errors in plain English using GPT-3 (text-davinci-003).


If your code isn't throwing an exception, it may still contain bugs. Adrenaline can scan your code for potential issues and propose fixes for them, if any exist.


Right now, Adrenaline is just a simple wrapper around GPT-3, meant to demonstrate what's possible with AI-driven debugging. There are many ways it can be improved:

  1. Client-side intelligence (e.g. static code analysis) could be used to build a better prompt for GPT-3.
  2. Instead of simply explaining your error, Adrenaline should provide chain-of-thought reasoning for how it fixed the error.
  3. In addition to chain-of-thought reasoning, Adrenaline could provide a ChatGPT-style assistant to answer questions about your error. I can even see Adrenaline being repurposed as a "coding tutor" for beginners.
  4. Creating a VSCode extension that does this would eliminate the friction of copy-pasting your code and error message into the site.

Ultimately, while the OpenAI Codex is surprisingly good at debugging code, I believe a more specialized model trained on all publicly available code could yield better results. There are interesting research questions here, such as how to generate synthetic training data (i.e. how can you systematically break code in a random but non-trivial way?).
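As a toy illustration of that last question, one cheap way to "break" code systematically is to mutate its abstract syntax tree, e.g. by flipping an operator. This is a sketch of the idea for generating synthetic bug/fix pairs, not a real data pipeline:

```python
import ast

class BreakCode(ast.NodeTransformer):
    """Inject a single synthetic bug: flip the first `+` into a `-`.
    One simple way to break code in a random-but-non-trivial way."""
    def __init__(self):
        self.done = False

    def visit_BinOp(self, node):
        self.generic_visit(node)
        if not self.done and isinstance(node.op, ast.Add):
            node.op = ast.Sub()  # the injected bug
            self.done = True
        return node

src = "def total(xs):\n    s = 0\n    for x in xs:\n        s = s + x\n    return s\n"
tree = BreakCode().visit(ast.parse(src))
broken_src = ast.unparse(ast.fix_missing_locations(tree))
print(broken_src)  # `s = s + x` has become `s = s - x`
```

Pairing `src` with `broken_src` yields one (correct, buggy) training example.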


Acknowledgements

Thanks to Malik Drabla for helping build the initial PoC during AI Hack Week, to Ramsey Lehman for design feedback, and to Paul Bogdan, Michael Usachenko, and Samarth Makhija for various other feedback.

Download Details:

Author: Shobrook
Source Code: 
License: MIT license

#javascript #AI #linter #codex #debugging #tool 


Simple and Easy to Use Catch-all SMTP Mail Server and Debugging Tool


Catch-all SMTP server for local debugging purposes.

This SMTP server catches all e-mail being sent through it and provides an interface to inspect the e-mails.

Note: this SMTP server is meant to be run locally. As such several security considerations (e.g. SMTP transaction delays) have been omitted by design. Never run this project as a public service.

Screenshot MailGrab

Project status

This project is currently working towards a first stable release version.
The master branch of this project will always be in a functioning state and will always point to the last release.

All active development should be based off the v0.4.0 branch.

Current limitations

  • Currently the project only supports unauthenticated SMTP requests (without the AUTH command)
  • No persistent storage
  • Because we currently only support in-memory storage, the project may run out of memory when handling a lot of mails or mails with many attachments


Requirements

  • PHP 7.1



Installation

Using Composer:

composer create-project peehaa/mailgrab


Alternatively, download the latest phar file from the releases page.



Usage

./bin/mailgrab will start MailGrab using the default configuration:

  • HTTP port: 9000
  • SMTP port: 9025

See ./bin/mailgrab --help for more configuration options

Once the MailGrab server is started, you can point your browser to http://localhost:9000 to access the web interface.
If you send a mail to the server over port 9025, it will automatically be displayed in the web interface.
There are example mail scripts available under ./examples (e.g. php examples/full-test.php) which you can run to test the functionality.
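If you prefer Python over the bundled PHP examples, a test mail can also be sent with the standard library. This is a sketch assuming MailGrab's default SMTP port 9025; `build_test_mail` and `send_test_mail` are illustrative helper names, not part of the project:

```python
import smtplib
from email.message import EmailMessage

def build_test_mail() -> EmailMessage:
    # Any addresses work: MailGrab is a catch-all and keeps everything.
    msg = EmailMessage()
    msg["From"] = "sender@example.com"
    msg["To"] = "recipient@example.com"
    msg["Subject"] = "MailGrab smoke test"
    msg.set_content("If you can read this in the web interface, MailGrab works.")
    return msg

def send_test_mail(host: str = "localhost", port: int = 9025) -> None:
    # Requires a running MailGrab instance on its default SMTP port.
    with smtplib.SMTP(host, port) as smtp:
        smtp.send_message(build_test_mail())
```

Run `send_test_mail()` while MailGrab is up and the message should appear in the web interface.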


/path/to/mailgrab.phar will start MailGrab using the default configuration:

  • HTTP port: 9000
  • SMTP port: 9025

See /path/to/mailgrab.phar --help for more configuration options

Build and development


To get started run npm install.

An NPM build script is provided and can be used by running npm run build in the project root.

Building phars

Currently all active development has to be based off the v0.4.0 branch.

If you want to build a phar you can run the build script located at ./bin/build which will create a new build in the ./build directory.

Download Details:

Author: PeeHaa
Source Code: 
License: MIT license

#php #debugging #mail #async 

Oral Brekke


Step by Step Guide: Debugging Application in Kubernetes

What is Kubernetes?

In order to debug an application running in Kubernetes, it is first necessary to understand Kubernetes and Docker containers. So, let's start with a quick introduction:

  • Kubernetes is an open-source container orchestration platform for deploying, scaling, and managing containerized applications. Automating application deployment, scaling, and operations is its aim.
  • Google developed it in 2014 and later donated it to the Cloud Native Computing Foundation (CNCF), which manages it today. A big reason everyone wants to use it is that it is very flexible.
  • As you run more and more servers, it becomes a challenge to manage all the containers and to know which application is running on which server. Kubernetes exists to reduce exactly this complexity.

What is a Docker Container?

  • Docker is a platform to build, ship, and run containers. Kubernetes is an open-source orchestration platform for Docker containers that operates at a larger scale than Docker Swarm.
  • Both are open-source platforms and are commonly used together in microservice architectures. Docker is a tool designed to make it easier to create, deploy, and run applications in containers, which in turn makes applications running in them easier to debug.

What are its key features?

The key features of Kubernetes are listed below:

Horizontal Scaling

Kubernetes can scale horizontally: it lets you deploy a pod with different containers, and when autoscaling is configured your containers scale automatically. This also helps when debugging applications under varying load.

Self Healing

Kubernetes has self-healing capabilities, one of its best features: it automatically restarts failed containers.

Automated Scheduling

Automated scheduling is a core Kubernetes feature. The scheduler is a critical part of the platform: it is responsible for matchmaking a pod with a node.

Load Balancing

Load balancing distributes load across pods; at the dispatch level it is easy to implement.

Automatic Rollouts and Rollbacks

Kubernetes can roll out any change according to the requirement and, if something goes wrong, roll it back automatically. This is a useful safety net when deploying fixes after debugging an application.

Storage Orchestration

It is also a great feature that we can mount the storage system of our choice. Kubernetes has many more features beyond the ones listed here.

What are its components and architecture?

Kubernetes uses a client-server architecture: the master plays the role of the server and each node plays the role of a client. A multi-master setup is possible, but typically a single master server controls the clients/nodes. Both the server and the clients consist of various components, described in the following:

Master/Server Component

The primary and vital components of the master node are the following:

Kubernetes Scheduler

The scheduler is a critical part of the platform. It is responsible for matchmaking a pod with a node: it reads the requirements of the service and schedules the pod on the best-fit node.
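The matchmaking idea can be illustrated with a toy best-fit placement. This is illustrative only, nothing like the real scheduler's filtering-and-scoring pipeline: among the nodes with enough free resources, pick the one that leaves the least slack.

```python
def best_fit(pod, nodes):
    """Toy best-fit scheduler. `pod` needs {"cpu", "mem"}; `nodes`
    maps node name -> (free_cpu, free_mem). Returns the feasible
    node with the least leftover capacity, or None if nothing fits."""
    slack = {
        name: (cpu - pod["cpu"]) + (mem - pod["mem"])
        for name, (cpu, mem) in nodes.items()
        if cpu >= pod["cpu"] and mem >= pod["mem"]
    }
    if not slack:
        return None
    return min(slack, key=slack.get)

nodes = {"node-a": (4, 8), "node-b": (2, 4), "node-c": (1, 1)}
print(best_fit({"cpu": 2, "mem": 3}, nodes))  # → node-b (tightest fit)
```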


Cloud Controller Manager

The cloud controller manager is responsible for managing controller processes that depend on the underlying cloud provider. For example, when a controller needs to check a volume or a load balancer in the cloud infrastructure, this is handled by the cloud controller manager. It also governs checks such as whether a node was terminated, and setting up routes.

Controller Manager

The cloud controller manager and the kube-controller-manager are different components and work differently; both are relevant when debugging applications against the Kubernetes architecture.

etcd Cluster

etcd stores the cluster's configuration details and helps aggregate the available resources; basically, it is a collection of servers/hosts. For security reasons it is accessible only from the API server.

Client/Node Component

The vital components of the client/node are the following:


Pods

A pod is a collection of containers; Kubernetes cannot run containers directly. Containers in the same pod share the same resources and the local network, so a container can easily communicate with the other containers in its pod.


Kubelet

The kubelet is responsible for maintaining all the pods, each of which contains a set of containers. It works to ensure that pods and their containers are running in the right state and are all healthy.

Debugging and Developing services locally in Kubernetes

An application on Kubernetes consists of different services, each running in its own container. Developing and debugging applications running in a cluster can be heavyweight: it requires you to get a shell on a running container and to run all your tools remotely. Telepresence is a tool for debugging applications in Kubernetes locally without that difficulty. It allows you to keep using familiar tools like your IDE and debugger. This section describes using Telepresence to develop and debug, locally, services that are running in a cluster. You need access to a Kubernetes cluster, and Telepresence must also be installed.

How to Develop and Debug existing services?

When developing an application on Kubernetes, we usually program or debug a single service, and that service requires other services for debugging and testing. With Telepresence's --swap-deployment option you can swap out an existing deployment: swapping connects you to the remote cluster while letting you run the service locally, so you can debug the application as if it were running in the cluster.

What are its benefits and limitations?

The highlighted benefits and limitations are the following:

Benefits of Debugging in Kubernetes

A big advantage is that developers can now use dedicated Kubernetes tools for debugging, such as Telepresence, as well as Ksync and Squash (see, for example, the Armador repo).

Limitations of Debugging in Kubernetes

  • Debugging locally is a standard part of every developer's process and development lifecycle, but with Kubernetes this approach becomes more difficult.
  • Kubernetes has its own orchestration mechanisms and optimization methodologies, which work well when developers run microservices on cloud providers, but they make debugging applications in Kubernetes more difficult.


Conclusion

A Kubernetes deployment is built around pods. Pods run on nodes, which are simply servers, and a pod can hold a single container or multiple containers. Kubernetes groups the containers that make up an application into logical units for easy management and discovery, which also makes it easy to see how the application is spread across nodes.

Original article source at:

#kubernetes #debugging #application 


How to Debug a Containerized Django App in PyCharm

Developing your Django app in Docker can be very convenient. You don't have to install extra services like Postgres, Nginx, and Redis on your own machine. It also makes it much easier for a new developer to quickly get up and running.

The grass is not always greener, though. Running Django in Docker can create some problems and make what was once easy difficult. For example, how do you set breakpoints in your code and debug?

In this quick tutorial, we'll look at how PyCharm comes to the rescue with its remote interpreter and Docker integration to make it easy to debug a containerized Django app.

This post uses PyCharm Professional Edition v2021.2.2. For the differences between the Professional and Community (free) Editions of PyCharm, take a look at the Professional vs. Community - Compare Editions guide.


By the end of this tutorial, you should be able to do the following in PyCharm:

  1. Configure Docker settings
  2. Set up a Remote Interpreter
  3. Create a Run/Debug configuration to debug a Django app running inside of Docker

Docker Settings in PyCharm

The first step we need to do is to tell PyCharm how to connect to Docker. To do so, open PyCharm settings (PyCharm > Preferences for Mac users or File > Settings for Windows and Linux users), and then expand the "Build, Execution, Deployment" setting. Click "Docker" and then click the "+" button to create a new Docker configuration.

PyCharm preferences

For Mac, select the Docker for Mac options. Then apply the changes.

Configure a Remote Interpreter

Now that we have the Docker configuration set up, it's time to configure Docker Compose as a remote interpreter. Assuming you have a project open, open the settings once again and expand the "Project: <your-project-name>" setting and click "Python Interpreter". Click the gear icon and choose "Add".

PyCharm preferences interpreter

In the next dialog, choose "Docker Compose" in the left pane, and select the Docker configuration you created in the previous steps in the "Server" field. The "Configuration file(s)" field should point to your Docker Compose file while the "Service" field should point to the web application service from your Docker Compose file.

For example, if your Docker Compose file looks like this, then you'll want to point to the web service:

version: '3.7'

services:
  web:
    build: ./app
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./app/:/usr/src/app/
    ports:
      - 8008:8000
    env_file:
      - ./
    depends_on:
      - db
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=hello_django
      - POSTGRES_PASSWORD=hello_django
      - POSTGRES_DB=hello_django_dev

volumes:
  postgres_data:


The debugger attaches specifically to the web service. All other services in your Docker Compose file will start when we later run the configuration in PyCharm.

Add interpreter

Click "OK" to apply the changes.

Back in the "Python Interpreter" setting dialog you should now see that the project has the correct remote interpreter.

Remote interpreter

Close the settings.

Create a Run/Debug Configuration

Now that we've configured PyCharm to be able to connect to Docker and created a remote interpreter configuration based on the Docker Compose file, we can create a Run/Debug configuration.

Click on the "Add configuration..." button at the top of the PyCharm window.

Add run configuration

Next click the "+" button and choose "Django server".

Add run configuration step 2

Give the configuration a name. The important thing in this configuration dialog is to set the "Host" field to 0.0.0.0.

Add run configuration step 3

Click "OK" to save the configuration. We can now see the Run/Debug configuration at the top of the PyCharm window and that the buttons (for run, debug, etc.) are enabled.

The finished run configuration

If you now set breakpoints in your Django app and press the debug button next to the Run/Debug configuration, you can debug the Django app running inside the Docker container.

Debugging in the Docker container


Conclusion

In this tutorial, we've shown you how to configure PyCharm for debugging a Django app running inside of Docker. With that, you can now not only debug your views, models, and whatnot, but also set breakpoints and debug your template code.

TIP: Want to supercharge your debugging even more? PyCharm also lets you set conditional breakpoints!

Original article source at:

#django #pycharm #debugging 

Lawrence Lesch


Hardhat: A Development Environment to Compile, Deploy, Test

Hardhat is an Ethereum development environment for professionals. It facilitates performing frequent tasks, such as running tests, automatically checking code for mistakes or interacting with a smart contract. Check out the plugin list to use it with your existing tools.


To install Hardhat, go to an empty folder, initialize an npm project (i.e. npm init), and run

npm install --save-dev hardhat

Once it's installed, just run this command and follow its instructions:

npx hardhat


On Hardhat's website you will find:


Contributions are always welcome! Feel free to open any issue or send a pull request.

Go to CONTRIBUTING.md to learn how to set up Hardhat's development environment.

Feedback, help and news

Join the Hardhat Support Discord server for questions and feedback.

Follow Hardhat on Twitter.

Happy building!


Built by the Nomic Foundation for the Ethereum community.

Join our Hardhat Support Discord server to stay up to date on new releases, plugins and tutorials.

Download Details:

Author: NomicFoundation
Source Code: 
License: View license

#typescript #javascript #debugging #tooling #ethereum 

Rupert Beatty


Watchdog: Class for Logging Excessive Blocking on The Main Thread


Class for logging excessive blocking on the main thread. It watches the main thread and checks that it doesn't get blocked for more than a defined threshold.

👮 Main thread was blocked for 1.25s 👮

You can also inspect which part of your code is blocking the main thread.


Simply instantiate Watchdog with the number of seconds that must pass to consider the main thread blocked. Additionally, you can enable strictMode, which stops execution whenever the threshold is reached. This way, you can inspect which part of your code is blocking the main thread.

let watchdog = Watchdog(threshold: 0.4, strictMode: true)

Don't forget to retain Watchdog somewhere or it will get released when it goes out of scope.
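The underlying technique is language-agnostic: a background thread measures how long it has been since the watched thread last "checked in". Below is a minimal Python sketch of the same idea, illustrative only; it is not Watchdog's Swift implementation, which pings the main run loop for you.

```python
import threading
import time

class PingWatchdog:
    """Fire `on_block(seconds)` when the watched thread fails to
    call ping() within `threshold` seconds. (Toy sketch of the
    watchdog technique; not the Swift library's implementation.)"""
    def __init__(self, threshold, on_block):
        self.threshold = threshold
        self.on_block = on_block
        self._last_ping = time.monotonic()
        self._lock = threading.Lock()
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._watch, daemon=True)
        self._thread.start()

    def ping(self):
        # Call this regularly from the watched (e.g. main) thread.
        with self._lock:
            self._last_ping = time.monotonic()

    def _watch(self):
        # Poll a few times per threshold period.
        while not self._stop.wait(self.threshold / 4):
            with self._lock:
                blocked_for = time.monotonic() - self._last_ping
            if blocked_for > self.threshold:
                self.on_block(blocked_for)

    def stop(self):
        self._stop.set()
        self._thread.join()
```

If the watched thread stops pinging (i.e. is blocked), the callback fires with the observed blocking duration.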


Requirements

  • iOS 8.0+, tvOS 9.0+ or macOS 10.9+
  • Swift 5.0



Carthage

Add the following to your Cartfile:

github "wojteklu/Watchdog"

Then run carthage update.

Follow the current instructions in Carthage's README for up to date installation instructions.


CocoaPods

Add the following to your Podfile:

pod 'Watchdog'

You will also need to make sure you're opting into using frameworks:

use_frameworks!

Manually add the file into your Xcode project. Slightly simpler, but updates are also manual.

Download Details:

Author: Wojteklu
Source Code: 
License: MIT license

#swift #macos #debugging 

Rupert Beatty


ResponseDetective: Sherlock Holmes Of The Networking Layer

ResponseDetective is a non-intrusive framework for intercepting any outgoing requests and incoming responses between your app and your server for debugging purposes.


ResponseDetective is written in Swift 5.3 and supports iOS 9.0+, macOS 10.10+ and tvOS 9.0+.


Incorporating ResponseDetective in your project is very simple – it all comes down to just two steps:

Step 1: Enable interception

For ResponseDetective to work, it needs to be added as a middleman between your (NS)URLSession and the Internet. You can do this by registering the provided URLProtocol class in your session's (NS)URLSessionConfiguration.protocolClasses, or use a shortcut method:

// Objective-C

NSURLSessionConfiguration *configuration = [NSURLSessionConfiguration defaultSessionConfiguration];
[RDTResponseDetective enableInConfiguration:configuration];
// Swift

let configuration = URLSessionConfiguration.default
ResponseDetective.enable(inConfiguration: configuration)

Then, you should use that configuration with your (NS)URLSession:

// Objective-C

NSURLSession *session = [[NSURLSession alloc] initWithConfiguration:configuration];
// Swift

let session = URLSession(configuration: configuration)

Or, if you're using AFNetworking/Alamofire as your networking framework, integrating ResponseDetective comes down to just initializing your AFURLSessionManager/Manager with the above (NS)URLSessionConfiguration:

// Objective-C (AFNetworking)

AFURLSessionManager *manager = [[AFURLSessionManager alloc] initWithSessionConfiguration:configuration];
// Swift (Alamofire)

let manager = Alamofire.SessionManager(configuration: configuration)

And that's all!

Step 2: Profit

Now it's time to perform the actual request:

// Objective-C

NSURLRequest *request = [[NSURLRequest alloc] initWithURL:[NSURL URLWithString:@""]];
[[session dataTaskWithRequest:request] resume];
// Swift

let request = URLRequest(url: URL(string: "")!)
session.dataTask(with: request).resume()

Voilà! 🎉 Check out your console output:

<0x000000000badf00d> [REQUEST] GET
 ├─ Headers
 ├─ Body
 │ <none>

<0x000000000badf00d> [RESPONSE] 200 (NO ERROR)
 ├─ Headers
 │ Server: nginx
 │ Date: Thu, 01 Jan 1970 00:00:00 GMT
 │ Content-Type: application/json
 ├─ Body
 │ {
 │   "args" : {
 │   },
 │   "headers" : {
 │     "User-Agent" : "ResponseDetective\/1 CFNetwork\/758.3.15 Darwin\/15.4.0",
 │     "Accept-Encoding" : "gzip, deflate",
 │     "Host" : "",
 │     "Accept-Language" : "en-us",
 │     "Accept" : "*\/*"
 │   },
 │   "url" : "https:\/\/\/get"
 │ }



If you're using Carthage, add the following dependency to your Cartfile:

github "netguru/ResponseDetective" ~> {version}


If you're using CocoaPods, add the following dependency to your Podfile:

pod 'ResponseDetective', '~> {version}'


To install the test dependencies or to build ResponseDetective itself, do not run carthage directly. It can't handle the Apple Silicon architectures introduced in Xcode 12. Instead, run it through the script:

$ ./ bootstrap


This project was made with ♡ by Netguru.

Release names

Starting from version 1.0.0, ResponseDetective's releases are named after Sherlock Holmes canon stories, in chronological order. What happens if we reach 60 releases and there are no more stories? We don't know, maybe we'll start naming them after cats or something.

Download Details:

Author: Netguru
Source Code: 
License: MIT license

#swift #macos #debugging #ios 

Toby Rogers


How to Debug C++ Code

Debugging in C++ 

In this tutorial, I am going to show you how to debug C++ code, starting from the very basics and then demonstrating how a debugger like GDB can be used to help you track errors in CPU code. 

I always tell my students, and professionals alike, that the debugger is your 'get out of jail free card' when working on a project. The reality is that programmers spend the majority of their time debugging as opposed to writing new code. Unfortunately, many programmers never learn how to use a debugger, or how they should approach debugging in general. In this talk I am going to show you how to debug C++ code, starting from the very basics and then demonstrating how a debugger like GDB can be used to help you track down errors in CPU code. Attendees will learn the names of debugging techniques (e.g. delta debugging), and I will demonstrate several debugging tools (stepping through code, capturing backtraces, conditional breakpoints, scripting, and even time traveling!) to demonstrate the power of debuggers. This is a beginner-friendly talk where we start from the beginning, but I suspect I may show a trick or two that folks with prior experience will appreciate.
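Delta debugging, named above, is worth a concrete sketch: it systematically shrinks a failing input while the failure keeps reproducing. Below is a simplified version of the classic ddmin idea, a teaching sketch rather than a production minimizer:

```python
def ddmin(still_fails, inp):
    """Shrink list `inp` while still_fails(inp) stays True
    (True means the bug still reproduces). Simplified ddmin."""
    n = 2  # number of chunks to try removing
    while len(inp) >= 2:
        chunk = len(inp) // n
        reduced = False
        for i in range(0, len(inp), chunk):
            candidate = inp[:i] + inp[i + chunk:]
            if still_fails(candidate):
                inp, n, reduced = candidate, max(n - 1, 2), True
                break
        if not reduced:
            if n >= len(inp):
                break  # down to single-element granularity
            n = min(n * 2, len(inp))
    return inp

# The "bug" reproduces whenever both 3 and 7 are in the input:
print(ddmin(lambda xs: 3 in xs and 7 in xs, list(range(10))))  # → [3, 7]
```

The same loop works for shrinking failing test inputs, command-line flags, or lines of a source file.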

#cplusplus #cpp #programming #debugging


Webgrind: Xdebug Profiling Web Frontend in PHP


Webgrind is an Xdebug profiling web frontend in PHP. It implements a subset of the features of kcachegrind and installs in seconds and works on all platforms. For quick'n'dirty optimizations it does the job. Here's a screenshot showing the output from profiling:


  • Super simple, cross platform installation - obviously :)
  • Track time spent in functions by self cost or inclusive cost. Inclusive cost is time inside function + calls to other functions.
  • See if time is spent in internal or user functions.
  • See where any function was called from and which functions it calls.
  • Generate a call graph using
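To make the self vs. inclusive distinction above concrete: a function's inclusive cost is its self cost plus the inclusive cost of everything it calls. A small sketch with made-up numbers (not webgrind's code, which parses real cachegrind files):

```python
def inclusive_cost(fn, self_cost, calls):
    """self_cost: fn -> time in fn's own body.
    calls: fn -> list of callees (assumed acyclic here).
    Inclusive cost = self cost + callees' inclusive costs."""
    return self_cost[fn] + sum(
        inclusive_cost(callee, self_cost, calls)
        for callee in calls.get(fn, [])
    )

self_cost = {"main": 1, "parse": 5, "render": 3, "escape": 2}
calls = {"main": ["parse", "render"], "render": ["escape"]}
print(inclusive_cost("main", self_cost, calls))  # → 11
```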

Suggestions for improvements and new features are more than welcome - this is just a start.


  1. Download webgrind
  2. Unzip package to favourite path accessible by webserver.
  3. Load webgrind in browser and start profiling

Alternatively, on PHP 5.4+ run the application using the PHP built-in server with the command composer serve, or php -S 0.0.0.0:8080 index.php if you are not using Composer.

For faster preprocessing, give write access to the bin subdirectory, or compile manually:

  • Linux / Mac OS X: execute make in the unzipped folder (requires GCC or Clang.)
  • Windows: execute nmake -f NMakeFile in the unzipped folder (requires Visual Studio 2015 or higher.)

See the Installation Wiki page for more.

Use with Docker

Instead of uploading webgrind to a web server or starting a local one, you can use the official Docker image to quickly inspect existing Xdebug profiling files. To use the Docker image, run the following command with /path/to/xdebug/files replaced by the actual path of your profiling files.

docker run --rm -v /path/to/xdebug/files:/tmp -p 80:80 jokkedk/webgrind:latest

Now open http://localhost in your browser. After using webgrind you can stop the Docker container by pressing Ctrl + C.

To use the built-in file viewer, mount the appropriate files under /host in the container.


Webgrind is written by Joakim Nygård and Jacob Oettinger. It would not have been possible without the great tool that Xdebug is thanks to Derick Rethans.

Current maintainer is Micah Ng.

Download Details:

Author: jokkedk
Source Code: 
License: View license

#php #xdebug #debugging #tool 

Rupert Beatty


XCGLogger: A Debug Log Framework for Use in Swift Projects


XCGLogger is the original debug log module for use in Swift projects.

Swift does not include a C preprocessor so developers are unable to use the debug log #define macros they would use in Objective-C. This means our traditional way of generating nice debug logs no longer works. Resorting to just plain old print calls means you lose a lot of helpful information, or requires you to type a lot more code.

XCGLogger allows you to log details to the console (and optionally a file, or other custom destinations), just like you would have with NSLog() or print(), but with additional information, such as the date, function name, filename and line number.

Go from this:

Simple message

to this:

2014-06-09 06:44:43.600 [Debug] [AppDelegate.swift:40] application(_:didFinishLaunchingWithOptions:): Simple message



Communication (Hat Tip Alamofire)

  • If you need help, use Stack Overflow (Tag 'xcglogger').
  • If you'd like to ask a general question, use Stack Overflow.
  • If you've found a bug, open an issue.
  • If you have a feature request, open an issue.
  • If you want to contribute, submit a pull request.
  • If you use XCGLogger, please Star the project on GitHub


Git Submodule


git submodule add

in your repository folder.


Add the following line to your Cartfile.

github "DaveWoodCom/XCGLogger" ~> 7.0.1

Then run carthage update --no-use-binaries or just carthage update. For details of the installation and usage of Carthage, visit its project page.

Developers running Swift 5.0 and above will need to add $(SRCROOT)/Carthage/Build/iOS/ObjcExceptionBridging.framework to their Input Files in the Copy Carthage Frameworks Build Phase.


Add something similar to the following lines to your Podfile. You may need to adjust based on your platform, version/branch etc.

source ''
platform :ios, '8.0'

pod 'XCGLogger', '~> 7.0.1'

Specifying the pod XCGLogger on its own will include the core framework. We're starting to add subspecs to allow you to include optional components as well:

pod 'XCGLogger/UserInfoHelpers', '~> 7.0.1': Include some experimental code to help deal with using UserInfo dictionaries to tag log messages.

Then run pod install. For details of the installation and usage of CocoaPods, visit its official web site.

Note: Before CocoaPods 1.4.0 it was not possible to use multiple pods with a mixture of Swift versions. You may need to ensure each pod is configured for the correct Swift version (check the targets in the pod project of your workspace). If you manually adjust the Swift version for a project, it'll reset the next time you run pod install. You can add a post_install hook into your podfile to automate setting the correct Swift versions. This is largely untested, and I'm not sure it's a good solution, but it seems to work:

post_install do |installer|
    installer.pods_project.targets.each do |target|
        if ['SomeTarget-iOS', 'SomeTarget-watchOS'].include? "#{target}"
            print "Setting #{target}'s SWIFT_VERSION to 4.2\n"
            target.build_configurations.each do |config|
                config.build_settings['SWIFT_VERSION'] = '4.2'
            end
        else
            print "Setting #{target}'s SWIFT_VERSION to Undefined (Xcode will automatically resolve)\n"
            target.build_configurations.each do |config|
                config.build_settings.delete('SWIFT_VERSION')
            end
        end
    end

    print "Setting the default SWIFT_VERSION to 3.2\n"
    installer.pods_project.build_configurations.each do |config|
        config.build_settings['SWIFT_VERSION'] = '3.2'
    end
end
You can adjust that to suit your needs of course.

Swift Package Manager

Add the following entry to your package's dependencies:

.Package(url: "", majorVersion: 7)

Backwards Compatibility


  • XCGLogger version 7.0.1 for Swift 5.0
  • XCGLogger version 6.1.0 for Swift 4.2
  • XCGLogger version 6.0.4 for Swift 4.1
  • XCGLogger version 6.0.2 for Swift 4.0
  • XCGLogger version 5.0.5 for Swift 3.0-3.2
  • XCGLogger version 3.6.0 for Swift 2.3
  • XCGLogger version 3.5.3 for Swift 2.2
  • XCGLogger version 3.2 for Swift 2.0-2.1
  • XCGLogger version 2.x for Swift 1.2
  • XCGLogger version 1.x for Swift 1.1 and below.

Basic Usage (Quick Start)

This quick start method is intended just to get you up and running with the logger. You should however use the advanced usage below to get the most out of this library.

Add the XCGLogger project as a subproject to your project, and add the appropriate library as a dependency of your target(s). Under the General tab of your target, add XCGLogger.framework and ObjcExceptionBridging.framework to the Embedded Binaries section.

Then, in each source file:

import XCGLogger

In your AppDelegate (or other global file), declare a global constant to the default XCGLogger instance.

let log = XCGLogger.default

In the

application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]? = nil) // iOS, tvOS


applicationDidFinishLaunching(_ notification: Notification) // macOS

function, configure the options you need:

log.setup(level: .debug, showThreadName: true, showLevel: true, showFileNames: true, showLineNumbers: true, writeToFile: "path/to/file", fileLevel: .debug)

The value for writeToFile: can be a String or URL. If the file already exists, it will be cleared before we use it. Omit the parameter or set it to nil to log to the console only. You can optionally set a different log level for the file output using the fileLevel: parameter. Set it to nil or omit it to use the same log level as the console.

Then, whenever you'd like to log something, use one of the convenience methods:

log.verbose("A verbose message, usually useful when working on a specific problem")
log.debug("A debug message")
log.info("An info message, probably useful to power users looking in")
log.notice("A notice message")
log.warning("A warning message, may indicate a possible error")
log.error("An error occurred, but it's recoverable, just info about what happened")
log.severe("A severe error occurred, we are likely about to crash now")
log.alert("An alert error occurred, a log destination could be made to email someone")
log.emergency("An emergency error occurred, a log destination could be made to text someone")

The different methods set the log level of the message. XCGLogger will only print messages with a log level that is greater than or equal to its current log level setting. So a logger with a level of .error will only output log messages with a level of .error, .severe, .alert, or .emergency.
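
That threshold comparison can be sketched in plain Swift. This is an illustrative analogue, not XCGLogger's actual implementation; the `Level` and `MiniLogger` names here are invented:

```swift
// Illustrative sketch, NOT XCGLogger's real types: log levels modeled as a
// Comparable enum, so a message is emitted only when its level is greater
// than or equal to the logger's current level.
enum Level: Int, Comparable {
    case verbose, debug, info, notice, warning, error, severe, alert, emergency

    static func < (lhs: Level, rhs: Level) -> Bool {
        lhs.rawValue < rhs.rawValue
    }
}

struct MiniLogger {
    var outputLevel: Level

    func shouldEmit(_ level: Level) -> Bool {
        level >= outputLevel
    }
}

let miniLogger = MiniLogger(outputLevel: .error)
print(miniLogger.shouldEmit(.warning))  // false: below the .error threshold
print(miniLogger.shouldEmit(.severe))   // true: at or above the threshold
```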

Advanced Usage (Recommended)

XCGLogger aims to be simple to use and to get you up and running quickly, with as few as the 2 lines of code above. But it also allows for much greater control and flexibility.

A logger can be configured to deliver log messages to a variety of destinations. Using the basic setup above, the logger will output log messages to the standard Xcode debug console, and optionally a file if a path is provided. It's quite likely you'll want to send logs to more interesting places, such as the Apple System Console, a database, third party server, or another application such as NSLogger. This is accomplished by adding the destination to the logger.

Here's an example of configuring the logger to output to the Apple System Log as well as a file.

// Create a logger object with no destinations
let log = XCGLogger(identifier: "advancedLogger", includeDefaultDestinations: false)

// Create a destination for the system console log (via NSLog)
let systemDestination = AppleSystemLogDestination(identifier: "advancedLogger.systemDestination")

// Optionally set some configuration options
systemDestination.outputLevel = .debug
systemDestination.showLogIdentifier = false
systemDestination.showFunctionName = true
systemDestination.showThreadName = true
systemDestination.showLevel = true
systemDestination.showFileName = true
systemDestination.showLineNumber = true
systemDestination.showDate = true

// Add the destination to the logger
log.add(destination: systemDestination)

// Create a file log destination
let fileDestination = FileDestination(writeToFile: "/path/to/file", identifier: "advancedLogger.fileDestination")

// Optionally set some configuration options
fileDestination.outputLevel = .debug
fileDestination.showLogIdentifier = false
fileDestination.showFunctionName = true
fileDestination.showThreadName = true
fileDestination.showLevel = true
fileDestination.showFileName = true
fileDestination.showLineNumber = true
fileDestination.showDate = true

// Process this destination in the background
fileDestination.logQueue = XCGLogger.logQueue

// Add the destination to the logger
log.add(destination: fileDestination)

// Add basic app info, version info etc., to the start of the logs
log.logAppDetails()

You can configure each log destination with different options depending on your needs.

Another common usage pattern is to have multiple loggers, perhaps one for UI issues, one for networking, and another for data issues.

Each log destination can have its own log level. As a convenience, you can set the log level on the log object itself and it will pass that level to each destination. Then set the destinations that need to be different.

Note: A destination object can only be added to one logger object, adding it to a second will remove it from the first.

Initialization Using A Closure

Alternatively you can use a closure to initialize your global variable, so that all initialization is done in one place

let log: XCGLogger = {
    let log = XCGLogger(identifier: "advancedLogger", includeDefaultDestinations: false)

    // Customize as needed
    return log
}()

Note: This creates the log object lazily, which means it's not created until it's actually needed. This delays the initial output of the app information details. Because of this, I recommend forcing the log object to be created at app launch by adding the line let _ = log at the top of your didFinishLaunching method if you don't already log something on app launch.

Log Anything

You can log strings:

log.debug("Hi there!")

or pretty much anything you want:

log.debug(CGPoint(x: 1.1, y: 2.2))
log.debug((4, 2))
log.debug(["Device": "iPhone", "Version": 7])

Filtering Log Messages

New to XCGLogger 4, you can now create filters to apply to your logger (or to specific destinations). Create and configure your filters (examples below), and then add them to the logger or destination objects by setting the optional filters property to an array containing the filters. Filters are applied in the order they exist in the array. During processing, each filter is asked if the log message should be excluded from the log. If any filter excludes the log message, it's excluded. Filters have no way to reverse the exclusion of another filter.

If a destination's filters property is nil, the log's filters property is used instead. To have one destination log everything, while having all other destinations filter something, add the filters to the log object and set the one destination's filters property to an empty array [].

Note: Unlike destinations, you can add the same filter object to multiple loggers and/or multiple destinations.
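
The exclusion semantics above can be sketched in a few lines of plain Swift. This is illustrative only; real XCGLogger filters are objects conforming to its filter protocol rather than bare closures:

```swift
// Illustrative sketch of the filter semantics, not XCGLogger's real types:
// each filter returns true to EXCLUDE a message; if ANY filter excludes it,
// the message stays excluded, and no later filter can reverse that.
typealias ExclusionFilter = (String) -> Bool

func isExcluded(_ message: String, filters: [ExclusionFilter]) -> Bool {
    for filter in filters where filter(message) {
        return true  // first exclusion wins; nothing can re-include
    }
    return false
}

let excludeSecrets: ExclusionFilter = { $0.contains("password") }
let excludeNoise: ExclusionFilter = { $0.contains("heartbeat") }

print(isExcluded("changed password", filters: [excludeSecrets, excludeNoise])) // true
print(isExcluded("user logged in", filters: [excludeSecrets, excludeNoise]))   // false
print(isExcluded("anything at all", filters: []))  // false: an empty array filters nothing
```

The last line mirrors the "empty array" trick above: a destination with filters set to [] logs everything, regardless of the logger's own filters.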

Filter by Filename

To exclude all log messages from a specific file, create an exclusion filter like so:

log.filters = [FileNameFilter(excludeFrom: ["AppDelegate.swift"], excludePathWhenMatching: true)]

excludeFrom: takes an Array<String> or Set<String> so you can specify multiple files at the same time.

excludePathWhenMatching: defaults to true so you can omit it unless you want to match paths as well.

To include log messages only for a specific set of files, create the filter using the includeFrom: initializer. It's also possible to just toggle the inverse property to flip the exclusion filter to an inclusion filter.

Filter by Tag

In order to filter log messages by tag, you must of course be able to set a tag on the log messages. Each log message can now have additional, user-defined data attached to it, to be used by filters (and/or formatters, etc). This is handled with a userInfo: Dictionary<String, Any> object. The dictionary key should be a namespaced string to avoid collisions with future additions. Official keys will begin with com.cerebralgardens.xcglogger. The tag key can be accessed via XCGLogger.Constants.userInfoKeyTags. You definitely don't want to be typing that, so feel free to create a global shortcut: let tags = XCGLogger.Constants.userInfoKeyTags. Now you can easily tag your logs:

let sensitiveTag = "Sensitive"
log.debug("A tagged log message", userInfo: [tags: sensitiveTag])

The value for tags can be an Array<String>, Set<String>, or just a String, depending on your needs. They'll all work the same way when filtered.

Depending on your workflow and usage, you'll probably create faster methods to set up the userInfo dictionary. See below for other possible shortcuts.

Now that you have your logs tagged, you can filter easily:

log.filters = [TagFilter(excludeFrom: [sensitiveTag])]

Just like the FileNameFilter, you can use includeFrom: or toggle inverse to include only log messages that have the specified tags.

Filter by Developer

Filtering by developer is exactly like filtering by tag, only using the userInfo key of XCGLogger.Constants.userInfoKeyDevs. In fact, both filters are subclasses of the UserInfoFilter class that you can use to create additional filters. See Extending XCGLogger below.

Mixing and Matching

In large projects with multiple developers, you'll probably want to start tagging log messages, as well as indicate the developer that added the message.

While extremely flexible, the userInfo dictionary can be a little cumbersome to use. There are a few possible methods you can use to simplify things. I'm still testing these out myself, so they're not officially part of the library yet (I'd love feedback or other suggestions).

I have created some experimental code to help create the UserInfo dictionaries. (Include the optional UserInfoHelpers subspec if using CocoaPods). Check the iOS Demo app to see it in use.

There are two structs that conform to the UserInfoTaggingProtocol protocol. Tag and Dev.

You can create an extension on each of these that suit your project. For example:

extension Tag {
    static let sensitive = Tag("sensitive")
    static let ui = Tag("ui")
    static let data = Tag("data")
}

extension Dev {
    static let dave = Dev("dave")
    static let sabby = Dev("sabby")
}

Along with these types, there's an overloaded operator | that can be used to merge them together into a dictionary compatible with the userInfo: parameter of the logging calls.

Then you can log messages like this:

log.debug("A tagged log message", userInfo: Dev.dave | Tag.sensitive)

There are some current issues I see with these UserInfoHelpers, which is why I've made them optional/experimental for now. I'd love to hear comments/suggestions for improvements.

  1. The overloaded operator | merges dictionaries as long as there are no Sets. If one of the dictionaries contains a Set, it'll use one of them without merging them, preferring the left-hand side if both sides have a set for the same key.
  2. Since the userInfo: parameter needs a dictionary, you can't pass in a single Dev or Tag object. You need to use at least two with the | operator to have it automatically convert to a compatible dictionary. If you only want one Tag for example, you must access the .dictionary parameter manually: userInfo: Tag("Blah").dictionary.
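
The left-preferring merge can be sketched with a plain dictionary overload of |. This is an illustrative stand-in, not the real UserInfoHelpers operator, which works on Tag/Dev values and also handles Sets:

```swift
// Illustrative sketch, not the real UserInfoHelpers operator: merge two
// userInfo-style dictionaries, preferring the left-hand side on key conflicts.
func | (lhs: [String: Any], rhs: [String: Any]) -> [String: Any] {
    lhs.merging(rhs) { left, _ in left }  // keep the lhs value when keys collide
}

let dev: [String: Any] = ["devs": "dave"]
let tag: [String: Any] = ["tags": "sensitive"]
let conflicting: [String: Any] = ["tags": "ui"]

let merged = dev | tag
print(merged["devs"] as? String ?? "")  // dave
print(merged["tags"] as? String ?? "")  // sensitive

// On a key conflict, the left-hand side wins:
print((tag | conflicting)["tags"] as? String ?? "")  // sensitive
```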

Selectively Executing Code

All log methods operate on closures. Using the same syntactic sugar as Swift's assert() function, this approach ensures we don't waste resources building log messages that won't be output anyway, while at the same time preserving a clean call site.

For example, the following log statement won't waste resources if the debug log level is suppressed:

log.debug("The description of \(thisObject) is really expensive to create")

Similarly, let's say you have to iterate through a loop in order to do some calculation before logging the result. In Objective-C, you could put that code block between #if/#endif and prevent the code from running. But in Swift, you would previously still need to process that loop, wasting resources. With XCGLogger it's as simple as:

log.debug {
    var total = 0.0
    for receipt in receipts {
        total += receipt.total
    }

    return "Total of all receipts: \(total)"
}

In cases where you wish to selectively execute code without generating a log line, return nil, or use one of the methods: verboseExec, debugExec, infoExec, warningExec, errorExec, and severeExec.
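
The lazy-evaluation idea can be sketched in plain Swift. This is an invented miniature, not XCGLogger's internals: the message is a closure, so it is never built when the level is suppressed, and returning nil executes the code without emitting a log line.

```swift
// Illustrative sketch, not XCGLogger's real implementation.
var buildCount = 0  // counts how often the expensive message is actually built

func debugLog(enabled: Bool, _ closure: () -> String?) {
    guard enabled else { return }                   // suppressed: closure never runs
    guard let message = closure() else { return }   // nil: code ran, nothing logged
    print(message)
}

debugLog(enabled: false) {
    buildCount += 1
    return "expensive description"
}
// buildCount is still 0 here: the closure was never evaluated

debugLog(enabled: true) {
    buildCount += 1
    return "expensive description"
}
// buildCount is now 1, and "expensive description" was printed
```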

Custom Date Formats

You can create your own DateFormatter object and assign it to the logger.

let dateFormatter = DateFormatter()
dateFormatter.dateFormat = "MM/dd/yyyy hh:mma"
dateFormatter.locale = Locale.current
log.dateFormatter = dateFormatter

Enhancing Log Messages With Colour

XCGLogger supports adding formatting codes to your log messages to enable colour in various places. The original option was to use the XcodeColors plug-in. However, Xcode (as of version 8) no longer officially supports plug-ins. You can still view your logs in colour, just not in Xcode at the moment. You can use the ANSI colour support to add colour to your fileDestination objects and view your logs via a terminal window. This gives you some extra options such as adding Bold, Italics, or (please don't) Blinking!

Once enabled, each log level can have its own colour. These colours can be customized as desired. If using multiple loggers, you could alternatively set each logger to its own colour.

An example of setting up the ANSI formatter:

if let fileDestination: FileDestination = log.destination(withIdentifier: XCGLogger.Constants.fileDestinationIdentifier) as? FileDestination {
    let ansiColorLogFormatter: ANSIColorLogFormatter = ANSIColorLogFormatter()
    ansiColorLogFormatter.colorize(level: .verbose, with: .colorIndex(number: 244), options: [.faint])
    ansiColorLogFormatter.colorize(level: .debug, with: .black)
    ansiColorLogFormatter.colorize(level: .info, with: .blue, options: [.underline])
    ansiColorLogFormatter.colorize(level: .notice, with: .green, options: [.italic])
    ansiColorLogFormatter.colorize(level: .warning, with: .red, options: [.faint])
    ansiColorLogFormatter.colorize(level: .error, with: .red, options: [.bold])
    ansiColorLogFormatter.colorize(level: .severe, with: .white, on: .red)
    ansiColorLogFormatter.colorize(level: .alert, with: .white, on: .red, options: [.bold])
    ansiColorLogFormatter.colorize(level: .emergency, with: .white, on: .red, options: [.bold, .blink])
    fileDestination.formatters = [ansiColorLogFormatter]
}

As with filters, you can use the same formatter objects for multiple loggers and/or multiple destinations. If a destination's formatters property is nil, the logger's formatters property will be used instead.

See Extending XCGLogger below for info on creating your own custom formatters.

Alternate Configurations

By using Swift build flags, different log levels can be used in debugging versus staging/production. Go to Build Settings -> Swift Compiler - Custom Flags -> Other Swift Flags and add -DDEBUG to the Debug entry.

#if DEBUG
    log.setup(level: .debug, showThreadName: true, showLevel: true, showFileNames: true, showLineNumbers: true)
#else
    log.setup(level: .severe, showThreadName: true, showLevel: true, showFileNames: true, showLineNumbers: true)
#endif

You can set any number of options up in a similar fashion. See the updated iOSDemo app for an example of using different log destinations based on options, search for USE_NSLOG.

Background Log Processing

By default, the supplied log destinations will process the logs on the thread they're called on. This is to ensure the log message is displayed immediately when debugging an application. You can add a breakpoint immediately after a log call and see the results when the breakpoint hits.

However, if you're not actively debugging the application, processing the logs on the current thread can introduce a performance hit. You can now specify that a destination process its logs on a dispatch queue of your choice (or even use a supplied default).

fileDestination.logQueue = XCGLogger.logQueue

or even

fileDestination.logQueue = DispatchQueue.global(qos: .background)

This works extremely well when combined with the Alternate Configurations method above.

#if DEBUG
    log.setup(level: .debug, showThreadName: true, showLevel: true, showFileNames: true, showLineNumbers: true)
    if let consoleLog = log.destination(withIdentifier: XCGLogger.Constants.baseConsoleDestinationIdentifier) as? ConsoleDestination {
        consoleLog.logQueue = XCGLogger.logQueue
    }
#else
    log.setup(level: .severe, showThreadName: true, showLevel: true, showFileNames: true, showLineNumbers: true)
#endif
Append To Existing Log File

When using the advanced configuration of the logger (see Advanced Usage above), you can now specify that the logger append to an existing log file, instead of automatically overwriting it.

Add the optional shouldAppend: parameter when initializing the FileDestination object. You can also add the appendMarker: parameter to add a marker to the log file indicating where a new instance of your app started appending. By default we'll add -- ** ** ** -- if the parameter is omitted. Set it to nil to skip appending the marker.

let fileDestination = FileDestination(writeToFile: "/path/to/file", identifier: "advancedLogger.fileDestination", shouldAppend: true, appendMarker: "-- Relaunched App --")

Automatic Log File Rotation

When logging to a file, you have the option to automatically rotate the log file to an archived destination, and have the logger automatically create a new log file in place of the old one.

Create a destination using the AutoRotatingFileDestination class and set the following properties:

  • targetMaxFileSize: auto rotate once the file is larger than this
  • targetMaxTimeInterval: auto rotate after this many seconds
  • targetMaxLogFiles: number of archived log files to keep; older ones are automatically deleted

Those are all guidelines for the logger, not hard limits.
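
The guideline behaviour can be sketched as a simple decision function. The property names come from the description above, but the logic here is an illustrative assumption, not AutoRotatingFileDestination's actual code:

```swift
// Illustrative sketch only: rotate when the log file exceeds either target.
struct RotationPolicy {
    var targetMaxFileSize: Int         // bytes
    var targetMaxTimeInterval: Double  // seconds

    func shouldRotate(fileSize: Int, fileAge: Double) -> Bool {
        fileSize > targetMaxFileSize || fileAge > targetMaxTimeInterval
    }
}

let policy = RotationPolicy(targetMaxFileSize: 1_048_576, targetMaxTimeInterval: 600)
print(policy.shouldRotate(fileSize: 10_000, fileAge: 30))     // false: under both targets
print(policy.shouldRotate(fileSize: 2_000_000, fileAge: 30))  // true: size target exceeded
print(policy.shouldRotate(fileSize: 10_000, fileAge: 3_600))  // true: time target exceeded
```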

Extending XCGLogger

You can create alternate log destinations (besides the built in ones). Your custom log destination must implement the DestinationProtocol protocol. Instantiate your object, configure it, and then add it to the XCGLogger object with add(destination:). There are two base destination classes (BaseDestination and BaseQueuedDestination) you can inherit from to handle most of the process for you, requiring you to only implement one additional method in your custom class. Take a look at ConsoleDestination and FileDestination for examples.

You can also create custom filters or formatters. Take a look at the provided versions as a starting point. Note that filters and formatters have the ability to alter the log messages as they're processed. This means you can create a filter that strips passwords, highlights specific words, encrypts messages, etc.
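
As a sketch of that idea in plain Swift (illustrative only; a real implementation would conform to XCGLogger's formatter protocol rather than being a free function):

```swift
import Foundation

// Illustrative sketch of a message-altering formatter: redact whatever
// follows "password=" up to the next space before the message is output.
func redactPasswords(_ message: String) -> String {
    guard let range = message.range(of: "password=") else { return message }
    let secretEnd = message[range.upperBound...].firstIndex(of: " ") ?? message.endIndex
    return message.replacingCharacters(in: range.upperBound..<secretEnd, with: "****")
}

print(redactPasswords("login password=hunter2 ok"))  // login password=**** ok
print(redactPasswords("no secrets here"))            // unchanged
```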


XCGLogger is the best logger available for Swift because of contributions from community members like you. There are many ways you can help continue to make it great.

  1. Star the project on GitHub.
  2. Report issues/bugs you find.
  3. Suggest features.
  4. Submit pull requests.
  5. Download and install one of my apps, such as my newest: All the Rings.
  6. You can visit my Patreon and contribute financially.

Note: when submitting a pull request, please use lots of small commits versus one huge commit. It makes it much easier to merge when there are several pull requests that need to be combined for a new version.

To Do

  • Add more examples of some advanced use cases
  • Add additional log destination types
  • Add Objective-C support
  • Add Linux support


If you find this library helpful, you'll definitely find this other tool helpful:


Also, please check out some of my other projects:

Change Log

The change log is now in its own file:

Download Details:

Author: DaveWoodCom
Source Code: 
License: MIT license

#swift #logger #debugging #ios 

XCGLogger: A Debug Log Framework for Use in Swift Projects

Clockwork: PHP Dev tools in Your Browser - Server-side Component

Clockwork is a development tool for PHP available right in your browser. Clockwork gives you an insight into your application runtime - including request data, performance metrics, log entries, database queries, cache queries, redis commands, dispatched events, queued jobs, rendered views and more - for HTTP requests, commands, queue jobs and tests.

This repository contains the server-side component of Clockwork.

Check out the Clockwork website for details.



Install the Clockwork library via Composer.

$ composer require itsgoingd/clockwork

Congratulations, you are done! To enable more features like command or queue job profiling, publish the configuration file via the vendor:publish Artisan command.

Note: If you are using the Laravel route cache, you will need to refresh it using the route:cache Artisan command.

Read full installation instructions on the Clockwork website.


Collecting data

The Clockwork server-side component collects and stores data about your application.

By default, Clockwork is only active when your app is in debug mode. You can choose to explicitly enable or disable Clockwork, or even set Clockwork to always collect data without exposing it, for further analysis.

We collect a whole bunch of useful data by default, but you can enable more features or disable features you don't need in the config file.

Some features might allow for advanced options, e.g. for database queries you can set a slow query threshold or enable detection of duplicate (N+1) queries. Check out the config file to see everything Clockwork can do.

There are several options that allow you to choose for which requests Clockwork is active.

On-demand mode will collect data only when the Clockwork app is open. You can even specify a secret, to be set in the app settings, required for requests to be collected. Errors only will record only requests ending with 4xx and 5xx responses. Slow only will collect only requests with response times above the set slow threshold. You can also filter the collected and recorded requests with a custom closure. CORS pre-flight requests are not collected by default.

New in Clockwork 4.1, Artisan commands, queue jobs and tests can also be collected; you need to enable this in the config file.

Clockwork also collects stack traces for data like log messages or database queries. The last 10 frames of the trace are collected by default. You can change the frame limit or disable this feature in the configuration file.

Viewing data

Web interface

Open to view and interact with the collected data.

The app will show all executed requests, which is useful when the request is not made by a browser, for example by a mobile application you are developing an API for.

Browser extension

A browser dev tools extension is also available for Chrome and Firefox:


Clockwork now gives you an option to show basic request information in the form of a toolbar in your app.

The toolbar is fully rendered client-side and requires installing a tiny JavaScript library.

Learn more on the Clockwork website.


You can log any variable via the clock() helper, from a simple string to an array or object, even multiple values:

clock(User::first(), auth()->user(), $username);

The clock() helper function returns its first argument, so you can easily add inline debugging statements to your code:


If you want to specify a log level, you can use the long-form call:

clock()->info("User {$username} logged in!");


Timeline gives you a visual representation of your application runtime.

To add an event to the timeline - start it with a description, execute the tracked code and finish the event. A fluent api is available to further configure the event.

// using timeline api with begin/end and fluent configuration
clock()->event('Importing tweets')->color('purple')->begin();
clock()->event('Importing tweets')->end();

Alternatively you can execute the tracked code block as a closure. You can also choose to use an array based configuration instead of the fluent api.

// using timeline api with run and array-based configuration
clock()->event('Updating cache', [ 'color' => 'green' ])->run(function () {
    // the tracked code block goes here
});

Read more about available features on the Clockwork website.

Download Details:

Author: itsgoingd
Source Code: 
License: MIT license

#php #debugging #laravel #logging 

Clockwork: PHP Dev tools in Your Browser - Server-side Component

9 Favorite PHP Libraries for Debugging and Profiling

In today's post we will learn about 9 Favorite PHP Libraries for Debugging and Profiling.

What is Debugging and Profiling?

Debugging: getting the code to work as you intended. Profiling: assessing how the code carries out a given task on a given platform, and how its performance might be improved. Validation: assessing how accurately the code carries out a given task.

Table of contents:

  • Barbushin PHP Console - Another web debugging console using Google Chrome.
  • Kint - A debugging and profiling tool.
  • Metrics - A simple metrics API library.
  • PCOV - A self contained code coverage compatible driver.
  • PHP Console - A web debugging console.
  • PHPBench - A benchmarking Framework.
  • PHPSpy - A low-overhead sampling profiler.
  • Tracy - A simple error detection, logging and time measuring library.
  • Whoops - A pretty error handling library.

1 - Barbushin PHP Console:

Another web debugging console using Google Chrome.

PHP Console allows you to handle PHP errors & exceptions, dump variables, execute PHP code remotely and many other things using Google Chrome extension PHP Console and PhpConsole server library.



{
	"require": {
		"php-console/php-console": "^3.1"
	}
}


$ composer require php-console/php-console


You can try most of PHP Console features on live demo server.


There is a PhpConsole\Connector class that initializes the connection between the PHP server and the Google Chrome extension. The connection is initialized when the PhpConsole\Connector instance is created:

$connector = PhpConsole\Connector::getInstance();

It will also be initialized when you call PhpConsole\Handler::getInstance() or PhpConsole\Helper::register().

Communication protocol

PHP Console uses headers to communicate with the client, so PhpConsole\Connector::getInstance() or PhpConsole\Handler::getInstance() must be called before any output. If headers have already been sent before script shutdown, or the PHP Console response package size exceeds the web server's header size limit, then PHP Console will store the response data in a PhpConsole\Storage implementation and send it to the client in STDOUT, in an additional HTTP request. So there is no limit on the PHP Console response package size.

Troubleshooting with $_SESSION handler overridden in some frameworks

By default, PHP Console uses PhpConsole\Storage\Session for postponed responses, so all temporary data will be stored in $_SESSION. But there is a problem with frameworks like Symfony and Laravel that override the PHP session handler. In this case you should use another PhpConsole\Storage implementation, like:

// Can be called only before PhpConsole\Connector::getInstance() and PhpConsole\Handler::getInstance()
PhpConsole\Connector::setPostponeStorage(new PhpConsole\Storage\File('/tmp/'));

See all available PhpConsole\Storage implementations in /src/PhpConsole/Storage.

Strip sources base path

If you want error sources and trace paths to be displayed in a shorter form, call:


So paths like /path/to/project/module/file.php will be displayed on the client as /module/file.php.

Works with different server encodings

If your internal server encoding is not UTF-8, you need to call:


Initialization performance

The PhpConsole server library is optimized for lazy initialization, only for clients that have the Google Chrome extension PHP Console installed. There is an example of correctly initializing PhpConsole on your production server.

View on Github

2 - Kint:

A debugging and profiling tool.

What am I looking at?

At first glance Kint is just a pretty replacement for var_dump(), print_r() and debug_backtrace().

However, it's much, much more than that. You will eventually wonder how you developed without it.


One of the main goals of Kint is to be zero setup.

Download the file and simply


require 'kint.phar';

Or, if you use Composer:

composer require kint-php/kint --dev



Kint::dump($GLOBALS, $_SERVER); // pass any number of parameters
d($GLOBALS, $_SERVER); // or simply use d() as a shorthand

Kint::trace(); // Debug backtrace

s($GLOBALS); // Basic output mode

~d($GLOBALS); // Text only output mode

Kint::$enabled_mode = false; // Disable kint
d('Get off my lawn!'); // Debugs no longer have any effect

View on Github

3 - Metrics:

A simple metrics API library.

Simple library that abstracts different metrics collectors. I find it necessary to have a consistent and simple metrics API that doesn't cause vendor lock-in.


Using Composer:

composer require beberlei/metrics


You can instantiate clients:


$collector = \Beberlei\Metrics\Factory::create('statsd');

You can measure stats:



$start = microtime(true);
$diff  = microtime(true) - $start;
$collector->timing('', $diff);

$value = 1234;
$collector->measure('', $value);

Some backends defer sending and aggregate all information; make sure to call flush:




$statsd = \Beberlei\Metrics\Factory::create('statsd');

$zabbix = \Beberlei\Metrics\Factory::create('zabbix', array(
    'hostname' => '',
    'server'   => 'localhost',
    'port'     => 10051,
));

$zabbixConfig = \Beberlei\Metrics\Factory::create('zabbix_file', array(
    'hostname' => '',
    'file'     => '/etc/zabbix/zabbix_agentd.conf',
));

$librato = \Beberlei\Metrics\Factory::create('librato', array(
    'hostname' => '',
    'username' => 'foo',
    'password' => 'bar',
));

$null = \Beberlei\Metrics\Factory::create('null');

View on Github

4 - PCOV:

A self contained code coverage compatible driver.


/* Shall start recording coverage information */
function \pcov\start() : void;

/* Shall stop recording coverage information */
function \pcov\stop() : void;

/*
 * Shall collect coverage information
 * @param integer $type define which type of information should be collected
 *		 \pcov\all        shall collect coverage information for all files
 *		 \pcov\inclusive  shall collect coverage information for the specified files
 *		 \pcov\exclusive  shall collect coverage information for all but the specified files
 * @param array $filter path of files (realpath) that should be filtered
 * @return array
 */
function \pcov\collect(int $type = \pcov\all, array $filter = []) : array;

/*
 * Shall clear stored information
 * @param bool $files set true to clear file tables
 * Note: clearing the file tables may have surprising consequences
 */
function \pcov\clear(bool $files = false) : void;

/* Shall return the list of files waiting to be collected */
function \pcov\waiting() : array;

/* Shall return the current size of the trace and CFG arena */
function \pcov\memory() : int;


PCOV is configured using PHP.ini:

  • pcov.enabled (default 1, SYSTEM): enable or disable zend hooks for pcov
  • pcov.directory (default auto, SYSTEM,PERDIR): restrict collection to files under this path
  • pcov.exclude (default unused, SYSTEM,PERDIR): exclude files matching this PCRE
  • pcov.initial.memory (default 65536, SYSTEM,PERDIR): shall set initial size of arena
  • pcov.initial.files (default 64, SYSTEM,PERDIR): shall set initial size of tables


The recommended defaults for production should be:

  • pcov.enabled = 0

The recommended defaults for development should be:

  • pcov.enabled = 1
  • pcov.directory = /path/to/your/source/directory

When pcov.directory is left unset, PCOV will attempt to find src, lib or app in the current working directory, in that order; if none are found, the current directory will be used, which may waste resources storing coverage information for the test suite.

If pcov.directory contains test code, it's recommended to set pcov.exclude to avoid wasting resources.

To avoid unnecessary allocation of additional arenas for traces and control flow graphs, pcov.initial.memory should be set according to the memory required by the test suite, which may be discovered with \pcov\memory().

To avoid reallocation of tables, pcov.initial.files should be set to a number higher than the number of files that will be loaded during testing, inclusive of test files.

Note that arenas are allocated in chunks: if the chunk size is set to 65536 and PCOV requires 65537 bytes, the system will allocate two chunks, each 65536 bytes. When setting arena space, therefore, be generous in your estimates.

View on Github

5 - PHP Console:

A web debugging console.

Creating a test file or using PHP's interactive mode can be a bit cumbersome for trying random PHP snippets. This tool allows you to run small bits of code easily, right from your browser.

It is secure, since it is accessible only from the local host, and very easy to set up and use.


Clone the git repo or download it as a zip/tarball, drop it somewhere in your local web document root and access it with http://localhost/path/to/php-console

You can also install it with Composer using this command:

composer create-project --stability=dev --keep-vcs seld/php-console

To update it just run git pull in the directory to pull the latest changes in.

You can use the internal PHP server too: run php -S localhost:1337 in a terminal and go to http://localhost:1337/.


Default settings are available in config.php.dist, if you would like to modify them, you can copy the file to config.php and edit settings.


Code contributions and ideas are of course very welcome. Send pull requests or open issues on GitHub.


  • 1.5.0-dev
    • Added melody-script integration. Requires a composer binary in the system's/web server's PATH environment variable.
    • Updated bundled ACE editor to 1.1.8
    • Layout is now flex-css based
    • Added a new bootstrap option to be included before source evaluation
    • Moved the tab size and IP whitelist settings into options
    • Added server-side runtime information, to be rendered in the console's status bar
    • Allow configuring options
  • 1.4.0
    • Added control-char escaping to make them more visible
  • 1.3.0
    • Added code persistence across sessions in localStorage + a reset button
  • 1.2.3
    • Fixed syntax highlighting
    • Fixed some styling issues
    • Fixed AJAX error handling for non-responding backends
  • 1.2.2
    • Updated ACE to latest version
    • Added composer.json support
  • 1.2.1
    • Performance fixes for ACE editor integration
    • JS is no longer a requirement
  • 1.2.0
    • Replaced built-in editor with ACE editor which provides highlighting and other features
    • Handle old setups with magic_quotes enabled
  • 1.1.2
    • Fixed issue with IPv6 loopback not being whitelisted
  • 1.1.1
    • Cross-browser compatibility enhancements
  • 1.1.0
    • Script execution is now done via an async JS request, preventing die() and exceptions from messing up the entire console
    • Added a status bar with char/line display
    • Added a toggle button to expand/collapse all krumo sub-trees at once
    • Cross-browser compatibility enhancements
    • Backspace now removes a full tab (i.e. 4 spaces)
    • Made tab character(s) configurable (see index.php)
  • 1.0.0
    • Initial Public Release

View on Github

6 - PHPBench:

A benchmarking Framework.

PHPBench is a benchmark runner for PHP analogous to PHPUnit but for performance rather than correctness.

Features include:

  • Revolutions: Repeat your code many times to determine average execution time.
  • Iterations: Sample your revolutions many times and review aggregated statistical data.
  • Process Isolation: Each iteration is executed in a separate process.
  • Reporting: Customizable reports and various output formats (e.g. console, CSV, Markdown, HTML).
  • Report storage and comparison: Store benchmarks locally to be used as a baseline reference, or to reference them later.
  • Memory Usage: Keep an eye on the amount of memory used by benchmarking subjects.
  • Assertions: Assert that code is performing within acceptable limits, or that it has not regressed from a previously recorded baseline.


composer require phpbench/phpbench --dev

See the installation instructions for more options.


Running benchmarks and comparing against a baseline:


Aggregated report:


Blinken logger:


View on Github

7 - PHPSpy:

A low-overhead sampling profiler.

phpspy is a low-overhead sampling profiler for PHP. It works with non-ZTS PHP 7.0+ with CLI, Apache, and FPM SAPIs on 64-bit Linux 3.2+.


$ git clone
Cloning into 'phpspy'...
$ cd phpspy
$ make
$ sudo ./phpspy --limit=1000 --pid=$(pgrep -n httpd) >traces
$ ./ <traces | ./vendor/ >flame.svg
$ google-chrome flame.svg # View flame.svg in browser

Build options

$ make                   # Use built-in structs
$ # or
$ USE_ZEND=1 make ...    # Use Zend structs (requires PHP development headers)


$ ./phpspy -h
  phpspy [options] -p <pid>
  phpspy [options] -P <pgrep-args>
  phpspy [options] [--] <cmd>

  -h, --help                         Show this help
  -p, --pid=<pid>                    Trace PHP process at `pid`
  -P, --pgrep=<args>                 Concurrently trace processes that
                                       match pgrep `args` (see also `-T`)
  -T, --threads=<num>                Set number of threads to use with `-P`
                                       (default: 16)
  -s, --sleep-ns=<ns>                Sleep `ns` nanoseconds between traces
                                       (see also `-H`) (default: 10101010)
  -H, --rate-hz=<hz>                 Trace `hz` times per second
                                       (see also `-s`) (default: 99)
  -V, --php-version=<ver>            Set PHP version
                                       (default: auto;
                                       supported: 70 71 72 73 74 80 81 82)
  -l, --limit=<num>                  Limit total number of traces to capture
                                       (approximate limit in pgrep mode)
                                       (default: 0; 0=unlimited)
  -i, --time-limit-ms=<ms>           Stop tracing after `ms` milliseconds
                                       (second granularity in pgrep mode)
                                       (default: 0; 0=unlimited)
  -n, --max-depth=<max>              Set max stack trace depth
                                       (default: -1; -1=unlimited)
  -r, --request-info=<opts>          Set request info parts to capture
                                       (q=query c=cookie u=uri p=path)
                                       (default: QCUP; none)
  -m, --memory-usage                 Capture peak and current memory usage
                                       with each trace (requires target PHP
                                       process to have debug symbols)
  -o, --output=<path>                Write phpspy output to `path`
                                       (default: -; -=stdout)
  -O, --child-stdout=<path>          Write child stdout to `path`
                                       (default: phpspy.%d.out)
  -E, --child-stderr=<path>          Write child stderr to `path`
                                       (default: phpspy.%d.err)
  -x, --addr-executor-globals=<hex>  Set address of executor_globals in hex
                                       (default: 0; 0=find dynamically)
  -a, --addr-sapi-globals=<hex>      Set address of sapi_globals in hex
                                       (default: 0; 0=find dynamically)
  -1, --single-line                  Output in single-line mode
  -b, --buffer-size=<size>           Set output buffer size to `size`.
                                       Note: In `-P` mode, setting this
                                       above PIPE_BUF (4096) may lead to
                                       interlaced writes across threads
                                       unless `-J m` is specified.
                                       (default: 4096)
  -f, --filter=<regex>               Filter output by POSIX regex
                                       (default: none)
  -F, --filter-negate=<regex>        Same as `-f` except negated
  -d, --verbose-fields=<opts>        Set verbose output fields
                                       (p=pid t=timestamp)
                                       (default: PT; none)
  -c, --continue-on-error            Attempt to continue tracing after
                                       encountering an error
  -#, --comment=<any>                Ignored; intended for self-documenting
  -@, --nothing                      Ignored
  -v, --version                      Print phpspy version and exit

Experimental options:
  -j, --event-handler=<handler>      Set event handler (fout, callgrind)
                                       (default: fout)
  -J, --event-handler-opts=<opts>    Set event handler options
                                       (fout: m=use mutex to prevent
                                       interlaced writes on stdout in `-P`)
  -S, --pause-process                Pause process while reading stacktrace
                                       (unsafe for production!)
  -e, --peek-var=<varspec>           Peek at the contents of the var located
                                       at `varspec`, which has the format:
                                       e.g., xyz@/path/to.php:10-20
  -g, --peek-global=<glospec>        Peek at the contents of a global var
                                       located at `glospec`, which has
                                       the format: <global>.<key>
                                       where <global> is one of:
                                       e.g., server.REQUEST_TIME
  -t, --top                          Show dynamic top-like output

View on Github

8 - Tracy:

A simple error detection, logging and time measuring library.


Tracy library is a useful helper for everyday PHP programmers. It helps you to:

  • quickly detect and correct errors
  • log errors
  • dump variables
  • measure execution time of scripts/queries
  • see memory consumption

PHP is a perfect language for making hardly detectable errors, because it gives programmers great flexibility. That makes Tracy\Debugger all the more valuable: it is the ultimate tool among diagnostic tools. If you are meeting Tracy for the first time, believe me, your life will be divided into the one before Tracy and the one with her. Welcome to the good part!

Installation and requirements

The recommended way to install it is via Composer:

composer require tracy/tracy

Alternatively, you can download the whole package or tracy.phar file.

Tracy     | compatible with PHP | compatible with browsers
Tracy 3.0 | PHP 8.0 – 8.2       | Chrome 64+, Firefox 69+, Safari 15.4+ and iOS Safari 15.4+
Tracy 2.9 | PHP 7.2 – 8.2       | Chrome 64+, Firefox 69+, Safari 13.1+ and iOS Safari 13.4+
Tracy 2.8 | PHP 7.2 – 8.1       | Chrome 55+, Firefox 53+, Safari 11+ and iOS Safari 11+
Tracy 2.7 | PHP 7.1 – 8.0       | Chrome 55+, Firefox 53+, MS Edge 16+, Safari 11+ and iOS Safari 11+
Tracy 2.6 | PHP 7.1 – 8.0       | Chrome 49+, Firefox 45+, MS Edge 14+, Safari 10+ and iOS Safari 10.2+
Tracy 2.5 | PHP 5.4 – 7.4       | Chrome 49+, Firefox 45+, MS Edge 12+, Safari 10+ and iOS Safari 10.2+
Tracy 2.4 | PHP 5.4 – 7.2       | Chrome 29+, Firefox 28+, IE 11+ (except AJAX), MS Edge 12+, Safari 9+ and iOS Safari 9.2+


Activating Tracy is easy. Simply add these two lines of code, preferably just after the library is loaded (like require 'vendor/autoload.php') and before any output is sent to the browser:

use Tracy\Debugger;

Debugger::enable();


The first thing you will notice on the website is a Debugger Bar.

(If you do not see anything, it means that Tracy is running in production mode. For security reasons, Tracy is visible only on localhost. You may force Tracy to run in development mode by passing Debugger::DEVELOPMENT as the first parameter of the enable() method.)

Calling enable() changes the error reporting level to E_ALL.

Debugger Bar

The Debugger Bar is a floating panel displayed in the bottom right corner of a page. You can move it with the mouse, and it will remember its position after the page reloads.


You can add other useful panels to the Debugger Bar. You can find interesting ones in addons or you can create your own.

If you do not want to show the Debugger Bar, set:

Debugger::$showBar = false;

Visualization of errors and exceptions

Surely, you know how PHP reports errors: there is something like this in the page source code:

<b>Parse error</b>:  syntax error, unexpected '}' in <b>HomepagePresenter.php</b> on line <b>15</b>

or uncaught exception:

<b>Fatal error</b>:  Uncaught Nette\MemberAccessException: Call to undefined method Nette\Application\UI\Form::addTest()? in /sandbox/vendor/nette/utils/src/Utils/ObjectMixin.php:100
Stack trace:
#0 /sandbox/vendor/nette/utils/src/Utils/Object.php(75): Nette\Utils\ObjectMixin::call(Object(Nette\Application\UI\Form), 'addTest', Array)
#1 /sandbox/app/forms/SignFormFactory.php(32): Nette\Object-&gt;__call('addTest', Array)
#2 /sandbox/app/presenters/SignPresenter.php(21): App\Forms\SignFormFactory-&gt;create()
#3 /sandbox/vendor/nette/component-model/src/ComponentModel/Container.php(181): App\Presenters\SignPresenter-&gt;createComponentSignInForm('signInForm')
#4 /sandbox/vendor/nette/component-model/src/ComponentModel/Container.php(139): Nette\ComponentModel\Container-&gt;createComponent('signInForm')
#5 /sandbox/temp/cache/latte/15206b353f351f6bfca2c36cc.php(17): Nette\ComponentModel\Co in <b>/sandbox/vendor/nette/utils/src/Utils/ObjectMixin.php</b> on line <b>100</b><br />

View on Github

9 - Whoops:

A pretty error handling library.

Whoops is an error handler framework for PHP. Out-of-the-box, it provides a pretty error interface that helps you debug your web projects, but at heart it's a simple yet powerful stacked error handling system.


  • Flexible, stack-based error handling
  • Stand-alone library with (currently) no required dependencies
  • Simple API for dealing with exceptions, trace frames & their data
  • Includes a pretty rad error page for your webapp projects
  • Includes the ability to open referenced files directly in your editor and IDE
  • Includes handlers for different response formats (JSON, XML, SOAP)
  • Easy to extend and integrate with existing libraries
  • Clean, well-structured & tested code-base


If you use Laravel 4, Laravel 5.5+ or Mezzio, you already have Whoops. There are also community-provided instructions on how to integrate Whoops into Silex 1, Silex 2, Phalcon, Laravel 3, Laravel 5, CakePHP 3, CakePHP 4, Zend 2, Zend 3, Yii 1, FuelPHP, Slim, Pimple, Laminas, or any framework consuming StackPHP middlewares or PSR-7 middlewares.

If you are not using any of these frameworks, here's a very simple way to install:

Use Composer to install Whoops into your project:

composer require filp/whoops

Register the pretty handler in your code:

$whoops = new \Whoops\Run;
$whoops->pushHandler(new \Whoops\Handler\PrettyPageHandler);
$whoops->register();

For more options, have a look at the example files in examples/ to get a feel for how things work. Also take a look at the API Documentation and the list of available handlers below.

You may also want to override some system calls Whoops does. To do that, extend Whoops\Util\SystemFacade, override functions that you want and pass it as the argument to the Run constructor.

You may also collect the HTML generated to process it yourself:

$whoops = new \Whoops\Run;
$whoops->pushHandler(new \Whoops\Handler\PrettyPageHandler);
$html = $whoops->handleException($e);

Available Handlers

whoops currently ships with the following built-in handlers, available in the Whoops\Handler namespace:

  • PrettyPageHandler - Shows a pretty error page when something goes pants-up
  • PlainTextHandler - Outputs plain text message for use in CLI applications
  • CallbackHandler - Wraps a closure or other callable as a handler. You do not need to use this handler explicitly, whoops will automatically wrap any closure or callable you pass to Whoops\Run::pushHandler
  • JsonResponseHandler - Captures exceptions and returns information on them as a JSON string. Can be used to, for example, play nice with AJAX requests.
  • XmlResponseHandler - Captures exceptions and returns information on them as an XML string. Can be used to, for example, play nice with AJAX requests.

You can also use pluggable handlers, such as SOAP handler.

View on Github

Thank you for following this article.

Related videos:

Debugging PHP with XDebug and VsCode

#php #debugging #profile 

Lawrence Lesch


Tiny JavaScript Debugging Utility Modelled After Node.js Core's Debug


A tiny JavaScript debugging utility modelled after Node.js core's debugging technique. Works in Node.js and web browsers.


$ npm install debug


debug exposes a function; simply pass this function the name of your module, and it will return a decorated version of console.error for you to pass debug statements to. This will allow you to toggle the debug output for different parts of your module as well as the module as a whole.

Example app.js:

var debug = require('debug')('http')
  , http = require('http')
  , name = 'My App';

// fake app

debug('booting %o', name);

http.createServer(function(req, res){
  debug(req.method + ' ' + req.url);
  res.end('hello\n');
}).listen(3000, function(){
  debug('listening');
});

// fake worker of some kind

require('./worker');

Example worker.js:

var a = require('debug')('worker:a')
  , b = require('debug')('worker:b');

function work() {
  a('doing lots of uninteresting work');
  setTimeout(work, Math.random() * 1000);
}

work();

function workb() {
  b('doing some work');
  setTimeout(workb, Math.random() * 2000);
}

workb();

The DEBUG environment variable is then used to enable these based on space or comma-delimited names.


Windows command prompt notes


On Windows the environment variable is set using the set command.

set DEBUG=*,-not_this


set DEBUG=* & node app.js

PowerShell (VS Code default)

PowerShell uses different syntax to set environment variables.

$env:DEBUG = "*,-not_this"


$env:DEBUG='app';node app.js

Then, run the program to be debugged as usual.

npm script example:

  "windowsDebug": "@powershell -Command $env:DEBUG='*';node app.js",

Namespace Colors

Every debug instance has a color generated for it based on its namespace name. This helps when visually parsing the debug output to identify which debug instance a debug line belongs to.
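Conceptually, the namespace-to-color mapping is a stable hash of the name into a fixed palette: the same namespace always gets the same color. The sketch below illustrates the idea only; the palette and hash function are made up and are not debug's actual implementation:

```javascript
// Simplified sketch: derive a stable color from a namespace name.
// Not debug's real code; the palette and hash are illustrative.
const PALETTE = [31, 32, 33, 34, 35, 36]; // basic ANSI foreground codes

function colorFor(namespace) {
  let hash = 0;
  for (const ch of namespace) {
    hash = (hash * 31 + ch.charCodeAt(0)) | 0; // simple rolling hash
  }
  return PALETTE[Math.abs(hash) % PALETTE.length];
}

// The same namespace always maps to the same color.
console.log(colorFor('worker:a') === colorFor('worker:a')); // true
console.log(PALETTE.includes(colorFor('worker:b')));        // true
```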


In Node.js, colors are enabled when stderr is a TTY. You should also install the supports-color module alongside debug; otherwise debug will only use a small handful of basic colors.

Web Browser

Colors are also enabled on "Web Inspectors" that understand the %c formatting option. These are WebKit web inspectors, Firefox (since version 31) and the Firebug plugin for Firefox (any version).

Millisecond diff

When actively developing an application it can be useful to see the time spent between one debug() call and the next. Suppose, for example, you invoke debug() before requesting a resource and again afterwards; the "+NNNms" shows how much time was spent between calls.

When stdout is not a TTY, Date#toISOString() is used instead, making the output more useful for logging the debug information.
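The delta itself is just the wall-clock difference between consecutive calls. A minimal sketch of the idea (illustrative only, not debug's actual code, which tracks this internally per namespace):

```javascript
// Minimal sketch of the "+NNNms" delta between successive log calls.
let prevTime;

function humanDelta() {
  const now = Date.now();
  const delta = prevTime === undefined ? 0 : now - prevTime;
  prevTime = now;
  return `+${delta}ms`;
}

console.log(humanDelta()); // first call: "+0ms"
console.log(humanDelta()); // subsequent calls show elapsed milliseconds
```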


If you're using this in one or more of your libraries, you should use the name of your library so that developers may toggle debugging as desired without guessing names. If you have more than one debugger, you should prefix them with your library name and use ":" to separate features. For example, "bodyParser" from Connect would be "connect:bodyParser". If you append a "*" to the end of your name, it will always be enabled regardless of the DEBUG environment variable. You can then use it for normal output as well as debug output.


The * character may be used as a wildcard. Suppose for example your library has debuggers named "connect:bodyParser", "connect:compress", "connect:session", instead of listing all three with DEBUG=connect:bodyParser,connect:compress,connect:session, you may simply do DEBUG=connect:*, or to run everything using this module simply use DEBUG=*.

You can also exclude specific debuggers by prefixing them with a "-" character. For example, DEBUG=*,-connect:* would include all debuggers except those starting with "connect:".
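These include/exclude rules can be approximated with regular expressions. The sketch below is a simplified re-implementation of the matching semantics for illustration; it is not debug's actual code:

```javascript
// Simplified sketch of DEBUG namespace matching (illustrative, not debug's code).
function makeMatcher(spec) {
  const names = [];
  const skips = [];
  for (const part of spec.split(/[\s,]+/).filter(Boolean)) {
    const negated = part.startsWith('-');
    const body = negated ? part.slice(1) : part;
    // "*" is the only wildcard; escape everything else so it matches literally.
    const pattern =
      '^' + body.replace(/[.+?^${}()|[\]\\]/g, '\\$&').replace(/\*/g, '.*') + '$';
    (negated ? skips : names).push(new RegExp(pattern));
  }
  return (ns) => !skips.some((r) => r.test(ns)) && names.some((r) => r.test(ns));
}

const enabled = makeMatcher('connect:*,-connect:session');
console.log(enabled('connect:bodyParser')); // true
console.log(enabled('connect:session'));    // false
console.log(enabled('express:router'));     // false
```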

Environment Variables

When running through Node.js, you can set a few environment variables that will change the behavior of the debug logging:

DEBUG             | Enables/disables specific debugging namespaces.
DEBUG_HIDE_DATE   | Hide date from debug output (non-TTY).
DEBUG_COLORS      | Whether or not to use colors in the debug output.
DEBUG_DEPTH       | Object inspection depth.
DEBUG_SHOW_HIDDEN | Shows hidden properties on inspected objects.

Note: The environment variables beginning with DEBUG_ end up being converted into an Options object that gets used with %o/%O formatters. See the Node.js documentation for util.inspect() for the complete list.


Debug uses printf-style formatting. Below are the officially supported formatters:

%O | Pretty-print an Object on multiple lines.
%o | Pretty-print an Object all on a single line.
%d | Number (both integer and float).
%j | JSON. Replaced with the string '[Circular]' if the argument contains circular references.
%% | Single percent sign ('%'). This does not consume an argument.

Custom formatters

You can add custom formatters by extending the debug.formatters object. For example, if you wanted to add support for rendering a Buffer as hex with %h, you could do something like:

const createDebug = require('debug')
createDebug.formatters.h = (v) => {
  return v.toString('hex')
}
// …elsewhere
const debug = createDebug('foo')
debug('this is hex: %h', new Buffer('hello world'))
//   foo this is hex: 68656c6c6f20776f726c6421 +0ms

Browser Support

You can build a browser-ready script using browserify, or just use the browserify-as-a-service build, if you don't want to build it yourself.

Debug's enable state is currently persisted by localStorage. Consider the situation shown below where you have worker:a and worker:b, and wish to debug both. You can enable this using localStorage.debug:

localStorage.debug = 'worker:*'

And then refresh the page.

a = debug('worker:a');
b = debug('worker:b');

setInterval(function(){
  a('doing some work');
}, 1000);

setInterval(function(){
  b('doing some work');
}, 1200);

In Chromium-based web browsers (e.g. Brave, Chrome, and Electron), the JavaScript console will—by default—only show messages logged by debug if the "Verbose" log level is enabled.

Output streams

By default debug will log to stderr, however this can be configured per-namespace by overriding the log method:

Example stdout.js:

var debug = require('debug');
var error = debug('app:error');

// by default stderr is used
error('goes to stderr!');

var log = debug('app:log');
// set this namespace to log via console.log
log.log = console.log.bind(console); // don't forget to bind to console!
log('goes to stdout');
error('still goes to stderr!');

// set all output to go via console.info
// overrides all per-namespace log settings
debug.log = console.info.bind(console);
error('now goes to stdout via console.info');
log('still goes to stdout, but via console.info now');


You can simply extend a debugger to create child instances with an appended namespace:

const log = require('debug')('auth');

//creates new debug instance with extended namespace
const logSign = log.extend('sign');
const logLogin = log.extend('login');

log('hello'); // auth hello
logSign('hello'); //auth:sign hello
logLogin('hello'); //auth:login hello

Set dynamically

You can also enable debug dynamically by calling the enable() method:

let debug = require('debug');

console.log(1, debug.enabled('test'));

debug.enable('test');
console.log(2, debug.enabled('test'));

debug.disable();
console.log(3, debug.enabled('test'));

This prints:

1 false
2 true
3 false

Usage: namespaces can include modes separated by a colon and wildcards.

Note that calling enable() completely overrides the previously set DEBUG variable:

$ DEBUG=foo node -e 'var dbg = require("debug"); dbg.enable("bar"); console.log(dbg.enabled("foo"))'
=> false


Calling debug.disable() will disable all namespaces. The function returns the namespaces that were enabled (including skipped ones). This can be useful if you want to disable debugging temporarily without knowing what was enabled to begin with.

For example:

let debug = require('debug');
let namespaces = debug.disable();
debug.enable(namespaces); // later: restore what was enabled before

Note: There is no guarantee that the string will be identical to the initial enable string, but semantically they will be identical.

Checking whether a debug target is enabled

After you've created a debug instance, you can determine whether or not it is enabled by checking the enabled property:

const debug = require('debug')('http');

if (debug.enabled) {
  // do stuff...
}
You can also manually toggle this property to force the debug instance to be enabled or disabled.

Usage in child processes

Due to the way debug detects if the output is a TTY or not, colors are not shown in child processes when stderr is piped. A solution is to pass the DEBUG_COLORS=1 environment variable to the child process.
For example:

worker = fork(WORKER_WRAP_PATH, [workerPath], {
  stdio: [
    /* stdin: */ 0,
    /* stdout: */ 'pipe',
    /* stderr: */ 'pipe',
    /* ipc: */ 'ipc',
  ],
  env: Object.assign({}, process.env, {
    DEBUG_COLORS: 1 // without this setting, colors won't be shown
  }),
});

worker.stderr.pipe(process.stderr, { end: false });

Download Details:

Author: Debug-js
Source Code: 
License: MIT license

#javascript #debugging #node #browser 

Nat Grady


Boomer: Debugging Tools to Inspect the Intermediate Steps of a Call


The {boomer} package provides debugging tools that let you inspect the intermediate results of a call. The output looks as if we exploded the call into its parts, hence the name.

  • boom() prints the intermediate results of a call or a code chunk.
  • rig() creates a copy of a function which will display the intermediate results of all the calls in its body.
  • rig_in_namespace() rigs a namespaced function in place, so it is always verbose, even when called by other existing functions. It is especially handy for package development.


Install the CRAN version with:

install.packages("boomer")

Or the development version with:

remotes::install_github("moodymudskipper/boomer")


boom(1 + !1 * 2)

boom(subset(head(mtcars, 2), qsec > 17))

You can use boom() with {magrittr} pipes, just pipe to boom() at the end of a pipe chain.

mtcars %>%
  head(2) %>%
  subset(qsec > 17) %>%
  boom()
If a call fails, {boomer} will print intermediate outputs up to the occurrence of the error, it can help with debugging:

"tomato" %>%
  substr(1, 3) %>%
  toupper() %>%
  sqrt() %>%
  boom()
boom() features optional arguments:

clock: set to TRUE to see how long each step (in isolation!) took to run.

print: set to a function such as str to change what is printed (see ?boom to see how to print differently depending on class). Useful alternatives would be dplyr::glimpse or invisible (to print nothing).

One use case is when the output is too long.

boom(lapply(head(cars), sqrt), clock = TRUE, print = str)

boom() also works on loops and multi-line expressions.

 boom(for(i in 1:3) paste0(i, "!"))


rig() a function in order to boom() its body; its arguments are printed by default as they are evaluated.

hello <- function(x) {
  if(!is.character(x) | length(x) != 1) {
    stop("`x` should be a string")
  }
  paste0("Hello ", x, "!")
}

rig_in_namespace() was designed to assist package development. Functions are rigged in place and we can explode the calls of the bodies of several functions at a time.

For instance you might have these functions in a package :

cylinder_vol <- function(r, h) {
  h * disk_area(r)
}

disk_area <- function(r) {
  pi * r^2
}
cylinder_vol depends on disk_area; call devtools::load_all(), then rig_in_namespace() on both, and enjoy the detailed output:

rig_in_namespace(cylinder_vol, disk_area)


To avoid typing boom() all the time you can use the provided addin named “Explode a call with boom()”: just assign a key combination to it (I use ctrl+shift+alt+B on Windows), select the call you’d like to explode, and fire away!


Several options are provided to tweak the printed output of {boomer}’s functions and addin; see ?boomer to learn about them.

In particular, on some operating systems {boomer}’s functions’ output might not always look good in markdown reports or reprexes, due to how the system handles UTF-8 characters. In this case, one can use options(boomer.safe_print = TRUE) for more satisfactory output.


{boomer} prints the output of intermediate steps as they are executed, and thus doesn’t say anything about what isn’t executed. This is in contrast with functions like lobstr::ast(), which return the parse tree.

Thanks to @data_question for suggesting the name {boomer} on twitter.

Download Details:

Author: Moodymudskipper
Source Code: 

#r #debugging #tools 


DebuggingUtilities.jl: Simple Utilities for Debugging Julia Code


This package contains simple utilities that may help debug Julia code.


Install with

pkg> dev

When you use it in packages, you should activate the project and add DebuggingUtilities as a dependency using project> dev DebuggingUtilities.



@showln shows variable values and the line number at which the statement was executed. This can be useful when variables change value in the course of a single function. For example:

using DebuggingUtilities

function foo()
    x = 5
    @showln x
    x = 7
    @showln x
end

might, when called (foo()), produce output like

x = 5
(in /home/tim/.julia/dev/DebuggingUtilities/test/funcdefs.jl:5)
x = 7
(in /home/tim/.julia/dev/DebuggingUtilities/test/funcdefs.jl:7)


@showlnt is for recursion, and uses indentation to show nesting depth. For example,

function recurses(n)
    @showlnt n
    n += 1
    @showlnt n
    if n < 10
        n = recurses(n+1)
    end
    return n
end

might, when called as recurses(1), generate

                                 n = 1
                                 (in recurses at /home/tim/.julia/dev/DebuggingUtilities/test/funcdefs.jl:10)
                                 n = 2
                                 (in recurses at /home/tim/.julia/dev/DebuggingUtilities/test/funcdefs.jl:12)
                                  n = 3
                                  (in recurses at /home/tim/.julia/dev/DebuggingUtilities/test/funcdefs.jl:10)
                                  n = 4
                                  (in recurses at /home/tim/.julia/dev/DebuggingUtilities/test/funcdefs.jl:12)
                                   n = 5
                                   (in recurses at /home/tim/.julia/dev/DebuggingUtilities/test/funcdefs.jl:10)
                                   n = 6
                                   (in recurses at /home/tim/.julia/dev/DebuggingUtilities/test/funcdefs.jl:12)
                                    n = 7
                                    (in recurses at /home/tim/.julia/dev/DebuggingUtilities/test/funcdefs.jl:10)
                                    n = 8
                                    (in recurses at /home/tim/.julia/dev/DebuggingUtilities/test/funcdefs.jl:12)
                                     n = 9
                                     (in recurses at /home/tim/.julia/dev/DebuggingUtilities/test/funcdefs.jl:10)
                                     n = 10
                                     (in recurses at /home/tim/.julia/dev/DebuggingUtilities/test/funcdefs.jl:12)

Each additional space indicates one additional layer in the call chain. Most of the initial space (even for n=1) is due to Julia's own REPL.


This is similar to include, except it displays progress. This can be useful in debugging long scripts that cause, e.g., segfaults.


Also similar to include, but it also measures the execution time of each expression, and prints them in order of increasing duration.

Download Details:

Author: Timholy
Source Code: 
License: View license

#julia #debugging #code 
