Adrenaline is a debugger powered by OpenAI Codex. It not only fixes your code, but teaches you along the way.
Adrenaline can be used online. Simply plug in your broken code and an error message (e.g. a stack trace or a natural-language description of the error) and click "Debug."
Note that you will have to supply your own OpenAI API key. This is to prevent API misuse.
To run locally, clone the repository and run the following:
$ npm install
$ npm run start-local
Adrenaline sends your code and error message to the OpenAI Edit & Insert API (code-davinci-edit-001), which returns code changes that might fix your error (or at least give you a starting point). The proposed fixes are displayed in-line like a diff, with the option to accept, reject, or modify each code change.
Not only does Adrenaline propose fixes for your errors, but it also explains errors in plain English using GPT-3 (text-davinci-003).
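For illustration, here's a minimal sketch of what such a call can look like using the openai Node package (v3-era SDK; the function name and prompt wording are assumptions, not Adrenaline's actual code):
// Sketch: ask the Edit API to repair broken code given an error message.
const { Configuration, OpenAIApi } = require("openai");

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

async function proposeFix(brokenCode, errorMessage) {
  const response = await openai.createEdit({
    model: "code-davinci-edit-001",
    input: brokenCode,
    instruction: `Fix this error: ${errorMessage}`,
  });
  return response.data.choices[0].text; // edited code, to be diffed against the input
}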
If your code isn't throwing an exception, it may still contain bugs. Adrenaline can scan your code for potential issues and propose fixes for them, if any exist.
Right now, Adrenaline is just a simple wrapper around GPT-3, meant to demonstrate what's possible with AI-driven debugging. There are many ways it can be improved.
Ultimately, while the OpenAI Codex is surprisingly good at debugging code, I believe a more specialized model trained on all publicly available code could yield better results. There are interesting research questions here, such as how to generate synthetic training data (i.e. how can you systematically break code in a random but non-trivial way?).
Thanks to Malik Drabla for helping build the initial PoC during AI Hack Week, to Ramsey Lehman for design feedback, and to Paul Bogdan, Michael Usachenko, and Samarth Makhija for various other feedback.
Author: Shobrook
Source Code: https://github.com/shobrook/adrenaline
License: MIT license
Catch-all SMTP server for local debugging purposes.
This SMTP server catches all e-mail being sent through it and provides an interface to inspect the e-mails.
Note: this SMTP server is meant to be run locally. As such, several security considerations (e.g. SMTP transaction delays) have been omitted by design. Never run this project as a public service.
This project is currently working towards a first stable release version.
The master branch of this project will always be in a functioning state and will always point to the last release.
All active development should be based off the v0.4.0 branch.
To install via Composer:
composer create-project peehaa/mailgrab
Download the latest phar file from the releases page.
Running ./bin/mailgrab will start MailGrab using the default configuration. See ./bin/mailgrab --help for more configuration options.
Once the MailGrab server is started you can point your browser to http://localhost:9000 to access the web interface.
If you send a mail to the server over port 9025 it will automatically be displayed in the web interface.
There are example mail scripts available under ./examples (e.g. php examples/full-test.php) which you can run to test the functionality.
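If you'd rather not use the bundled examples, a minimal hand-rolled test script might look like this (a sketch assuming MailGrab is listening on the default port 9025):
<?php
// Speak just enough SMTP to hand MailGrab a test message.
$fp = fsockopen('127.0.0.1', 9025, $errno, $errstr, 5);
if (!$fp) {
    die("Connection failed: $errstr ($errno)\n");
}
$send = function (string $line) use ($fp): string {
    fwrite($fp, $line . "\r\n");
    return fgets($fp, 512); // read the server's reply
};
fgets($fp, 512);                          // 220 greeting
$send('HELO localhost');
$send('MAIL FROM:<sender@example.com>');
$send('RCPT TO:<recipient@example.com>');
$send('DATA');                            // server answers 354
fwrite($fp, "Subject: MailGrab test\r\n\r\nHello from a test script.\r\n.\r\n");
fgets($fp, 512);                          // 250, message accepted
$send('QUIT');
fclose($fp);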
Running /path/to/mailgrab.phar will start MailGrab using the default configuration. See /path/to/mailgrab.phar --help for more configuration options.
To get started run npm install. An NPM build script is provided and can be used by running npm run build in the project root.
Currently all active development has to be based off the v0.4.0 branch.
If you want to build a phar you can run the build script located at ./bin/build, which will create a new build in the ./build directory.
Author: PeeHaa
Source Code: https://github.com/PeeHaa/mailgrab
License: MIT license
To debug an application running in Kubernetes, you first need to understand Kubernetes itself and Docker containers. So, let's start with a quick introduction:
The key features of Kubernetes are listed below:
It can scale horizontally: you can deploy pods with one or more containers and have them scale automatically once configured.
It has self-healing capabilities, one of its best features: it automatically restarts failed containers.
Automated scheduling: the scheduler is a critical part of the platform, responsible for matching each pod with a suitable node.
Load distribution: load balancing is easy to implement at the dispatch level.
Rollouts and rollbacks: changes can be rolled out progressively, and if something goes wrong they can be rolled back.
Storage orchestration: you can mount the storage system of your choice. Kubernetes offers many more features beyond these.
Kubernetes uses a client-server architecture: the master plays the role of the server and the nodes play the role of clients. A multi-master setup is possible, but typically a single master server controls the clients/nodes. Both the server and the clients consist of various components, described below.
The primary and vital components of the master node are the following:
The scheduler is a critical part of the platform, responsible for matching each pod with a node. After reading a service's requirements, it schedules the pod on the best-fitting node.
The cloud controller manager is responsible for managing controller processes that depend on the underlying cloud provider, for example when a controller needs to check a volume or a load balancer in the cloud infrastructure, verify whether a node was terminated, or set up routes.
The cloud controller manager and the kube-controller-manager are distinct components and work differently.
The cluster store aggregates the available resources (essentially a collection of servers/hosts), stores the configuration details, and, for security reasons, is accessible only from the API server.
The vital components of the client/node side are the following:
A pod is a collection of containers. Kubernetes does not run containers directly; containers in the same pod share the same resources and local network, so a container can easily communicate with the other containers in its pod.
The node agent is responsible for maintaining the pods and their sets of containers, working to ensure that pods and their containers run in the right state and are all healthy.
An application on Kubernetes consists of multiple services, each running in its own container. Developing and debugging services in a potentially large, heavy cluster traditionally requires getting a shell on a running container and then running all your tools in that remote environment. Telepresence is a tool that lets you debug applications in Kubernetes locally without that difficulty, using your usual tools such as your IDE and debugger. The following describes using Telepresence to develop and debug services running on a cluster locally; Telepresence must be installed both locally and in the cluster.
When developing an application on Kubernetes, we typically write or debug a single service, which depends on other services for debugging and testing. With Telepresence, the --swap-deployment option swaps out an existing deployment, connects you to the remote cluster, and lets you run and debug the service locally.
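A sketch of what that invocation can look like (the deployment name and run command are hypothetical; the flags follow the Telepresence 1.x CLI):
telepresence --swap-deployment my-service --expose 8080 \
  --run python3 my_service.py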
Highlighted below are the benefits and limitations:
A major advantage is that developers can combine Telepresence with other Kubernetes tools for debugging, such as those in the Armador repo, or use Ksync and Squash to debug the application.
A Kubernetes deployment manages pods. Pods run on nodes (servers), and each pod can hold a single container or multiple containers. Kubernetes groups the containers that make up an application into logical units for easy management and discovery.
Original article source at: https://www.xenonstack.com/
Developing your Django app in Docker can be very convenient. You don't have to install extra services like Postgres, Nginx, and Redis on your own machine. It also makes it much easier for a new developer to quickly get up and running.
The grass is not always greener, though. Running Django in Docker can create some problems and make what was once easy difficult. For example, how do you set breakpoints in your code and debug?
In this quick tutorial, we'll look at how PyCharm comes to the rescue with its remote interpreter and Docker integration to make it easy to debug a containerized Django app.
This post uses PyCharm Professional Edition v2021.2.2. For the differences between the Professional and Community (free) Editions of PyCharm, take a look at the Professional vs. Community - Compare Editions guide.
By the end of this tutorial, you should be able to do the following in PyCharm:
The first step is to tell PyCharm how to connect to Docker. To do so, open the PyCharm settings (PyCharm > Preferences for Mac users or File > Settings for Windows and Linux users), and then expand the "Build, Execution, Deployment" setting. Click "Docker" and then click the "+" button to create a new Docker configuration.
For Mac, select the Docker for Mac option. Then apply the changes.
Now that we have the Docker configuration set up, it's time to configure Docker Compose as a remote interpreter. Assuming you have a project open, open the settings once again, expand the "Project: <your-project-name>" setting, and click "Python Interpreter". Click the gear icon and choose "Add".
In the next dialog, choose "Docker Compose" in the left pane, and select the Docker configuration you created in the previous steps in the "Server" field. The "Configuration file(s)" field should point to your Docker Compose file while the "Service" field should point to the web application service from your Docker Compose file.
For example, if your Docker Compose file looks like this, then you'll want to point to the web service:
version: '3.7'

services:
  web:
    build: ./app
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./app/:/usr/src/app/
    ports:
      - 8008:8000
    env_file:
      - ./.env.dev
    depends_on:
      - db
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=hello_django
      - POSTGRES_PASSWORD=hello_django
      - POSTGRES_DB=hello_django_dev

volumes:
  postgres_data:
The debugger attaches specifically to the web service. All other services in your Docker Compose file will start when we later run the configuration in PyCharm.
Click "OK" to apply the changes.
Back in the "Python Interpreter" setting dialog you should now see that the project has the correct remote interpreter.
Close the settings.
Now that we've configured PyCharm to be able to connect to Docker and created a remote interpreter configuration based on the Docker Compose file, we can create a Run/Debug configuration.
Click on the "Add configuration..." button at the top of the PyCharm window.
Next click the "+" button and choose "Django server".
Give the configuration a name. The important thing in this configuration dialog is to set the "Host" field to 0.0.0.0.
Click "OK" to save the configuration. We can now see the Run/Debug configuration at the top of the PyCharm window and that the buttons (for run, debug, etc.) are enabled.
If you now set breakpoints in your Django app and press the debug button next to the Run/Debug configuration, you can debug the Django app running inside the Docker container.
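For example, you could set a breakpoint inside a simple view like this (a hypothetical app's views.py):
# views.py
from django.http import HttpResponse

def index(request):
    greeting = "Hello from Docker!"  # set a breakpoint on this line
    return HttpResponse(greeting)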
In this tutorial, we've shown you how to configure PyCharm for debugging a Django app running inside of Docker. With that, you can now not only debug your views and models and what not, but also set breakpoints and debug your template code.
TIP: Want to supercharge your debugging even more? PyCharm also lets you set conditional breakpoints!
Original article source at: https://testdriven.io/
Hardhat is an Ethereum development environment for professionals. It facilitates performing frequent tasks, such as running tests, automatically checking code for mistakes or interacting with a smart contract. Check out the plugin list to use it with your existing tools.
To install Hardhat, go to an empty folder, initialize an npm project (i.e. npm init), and run:
npm install --save-dev hardhat
Once it's installed, just run this command and follow its instructions:
npx hardhat
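From there, everyday tasks run through the same entry point, for example:
npx hardhat compile   # compile your contracts
npx hardhat test      # run the test suite
npx hardhat node      # start a local development network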
On Hardhat's website you will find:
Contributions are always welcome! Feel free to open any issue or send a pull request.
Go to CONTRIBUTING.md to learn about how to set up Hardhat's development environment.
Hardhat Support Discord server: for questions and feedback.
👷♀️👷♂️👷♀️👷♂️👷♀️👷♂️👷♀️👷♂️👷♀️👷♂️👷♀️👷♂️👷♀️👷♂️
Built by the Nomic Foundation for the Ethereum community.
Join our Hardhat Support Discord server to stay up to date on new releases, plugins and tutorials.
Author: NomicFoundation
Source Code: https://github.com/NomicFoundation/hardhat
License: View license
Class for logging excessive blocking on the main thread. It watches the main thread and checks that it doesn't get blocked for more than a defined threshold.
👮 Main thread was blocked for 1.25s 👮
You can also inspect which part of your code is blocking the main thread.
Simply instantiate Watchdog with the number of seconds that must pass to consider the main thread blocked. Additionally, you can enable strictMode, which stops execution whenever the threshold is reached. This way, you can inspect which part of your code is blocking the main thread.
let watchdog = Watchdog(threshold: 0.4, strictMode: true)
Don't forget to retain Watchdog somewhere or it will get released when it goes out of scope.
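For example, a minimal sketch that keeps the instance alive for the app's lifetime by storing it as a property (the property placement is an assumption):
import UIKit
import Watchdog

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
    var window: UIWindow?

    // Retained for the lifetime of the app, so the watchdog keeps running.
    let watchdog = Watchdog(threshold: 0.4, strictMode: false)
}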
Add the following to your Cartfile:
github "wojteklu/Watchdog"
Then run carthage update.
Follow the current instructions in Carthage's README for up to date installation instructions.
Add the following to your Podfile:
pod 'Watchdog'
You will also need to make sure you're opting into using frameworks:
use_frameworks!
Manually add the file into your Xcode project. Slightly simpler, but updates are also manual.
Author: Wojteklu
Source Code: https://github.com/wojteklu/Watchdog
License: MIT license
ResponseDetective is a non-intrusive framework for intercepting any outgoing requests and incoming responses between your app and your server for debugging purposes.
ResponseDetective is written in Swift 5.3 and supports iOS 9.0+, macOS 10.10+ and tvOS 9.0+.
Incorporating ResponseDetective in your project is very simple – it all comes down to just two steps:
For ResponseDetective to work, it needs to be added as a middleman between your (NS)URLSession and the Internet. You can do this by registering the provided URLProtocol class in your session's (NS)URLSessionConfiguration.protocolClasses, or use a shortcut method:
// Objective-C
NSURLSessionConfiguration *configuration = [NSURLSessionConfiguration defaultSessionConfiguration];
[RDTResponseDetective enableInConfiguration:configuration];
// Swift
let configuration = URLSessionConfiguration.default
ResponseDetective.enable(inConfiguration: configuration)
Then, you should use that configuration with your (NS)URLSession:
// Objective-C
NSURLSession *session = [[NSURLSession alloc] initWithConfiguration:configuration];
// Swift
let session = URLSession(configuration: configuration)
Or, if you're using AFNetworking/Alamofire as your networking framework, integrating ResponseDetective comes down to just initializing your AFURLSessionManager/Manager with the above (NS)URLSessionConfiguration:
// Objective-C (AFNetworking)
AFURLSessionManager *manager = [[AFURLSessionManager alloc] initWithSessionConfiguration:configuration];
// Swift (Alamofire)
let manager = Alamofire.SessionManager(configuration: configuration)
And that's all!
Now it's time to perform the actual request:
// Objective-C
NSURLRequest *request = [[NSURLRequest alloc] initWithURL:[NSURL URLWithString:@"http://httpbin.org/get"]];
[[session dataTaskWithRequest:request] resume];
// Swift
let request = URLRequest(URL: URL(string: "http://httpbin.org/get")!)
session.dataTask(with: request).resume()
Voilà! 🎉 Check out your console output:
<0x000000000badf00d> [REQUEST] GET https://httpbin.org/get
├─ Headers
├─ Body
│ <none>
<0x000000000badf00d> [RESPONSE] 200 (NO ERROR) https://httpbin.org/get
├─ Headers
│ Server: nginx
│ Date: Thu, 01 Jan 1970 00:00:00 GMT
│ Content-Type: application/json
├─ Body
│ {
│ "args" : {
│ },
│ "headers" : {
│ "User-Agent" : "ResponseDetective\/1 CFNetwork\/758.3.15 Darwin\/15.4.0",
│ "Accept-Encoding" : "gzip, deflate",
│ "Host" : "httpbin.org",
│ "Accept-Language" : "en-us",
│ "Accept" : "*\/*"
│ },
│ "url" : "https:\/\/httpbin.org\/get"
│ }
If you're using Carthage, add the following dependency to your Cartfile:
github "netguru/ResponseDetective" ~> {version}
If you're using CocoaPods, add the following dependency to your Podfile:
use_frameworks!
pod 'ResponseDetective', '~> {version}'
To install the test dependencies or to build ResponseDetective itself, do not run carthage directly; it can't handle the Apple Silicon architectures introduced in Xcode 12. Instead, run it through the carthage.sh script:
$ ./carthage.sh bootstrap
This project was made with ♡ by Netguru.
Starting from version 1.0.0, ResponseDetective's releases are named after Sherlock Holmes canon stories, in chronological order. What happens if we reach 60 releases and there are no more stories? We don't know, maybe we'll start naming them after cats or something.
Author: Netguru
Source Code: https://github.com/netguru/ResponseDetective
License: MIT license
In this tutorial, I am going to show you how to debug C++ code, starting from the very basics and then demonstrating how a debugger like GDB can be used to help you track errors in CPU code.
I always tell my students that the debugger is your 'get out of jail free card' when working on a project. I say the same thing to professionals: debuggers are your 'get out of jail free card'. The reality is that programmers spend the majority of their time debugging rather than writing new code. Unfortunately, many programmers never learn how to use a debugger, or how they should approach debugging in the first place. In this talk I am going to show you how to debug C++ code, starting from the very basics and then demonstrating how a debugger like GDB can be used to help you track errors in CPU code. Attendees will learn the names of debugging techniques (e.g. delta debugging), and I will demonstrate several debugging tools (stepping through code, capturing backtraces, conditional breakpoints, scripting, and even time traveling!) to demonstrate the power of debuggers. This is a beginner-friendly talk where we start from the beginning, but I suspect I may show a trick or two that folks with prior experience will appreciate.
#cplusplus #cpp #programming #debugging
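As a taste of the techniques covered, here's a minimal GDB session sketch (the file name and variable are hypothetical):
$ g++ -g -O0 main.cpp -o main           # build with debug symbols, no optimization
$ gdb ./main
(gdb) break main.cpp:42 if count > 10   # conditional breakpoint
(gdb) run
(gdb) backtrace                         # capture a backtrace
(gdb) print count                       # inspect a variable
(gdb) next                              # step over the current line
(gdb) record                            # start recording for reverse debugging
(gdb) reverse-step                      # "time travel" backwards one step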
Webgrind is an Xdebug profiling web frontend in PHP. It implements a subset of the features of KCachegrind, installs in seconds, and works on all platforms. For quick'n'dirty optimizations it does the job.
Suggestions for improvements and new features are more than welcome - this is just a start.
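Webgrind reads the cachegrind files that Xdebug's profiler produces. A minimal php.ini sketch for generating them (Xdebug 3 settings; the output path is an assumption):
zend_extension=xdebug
xdebug.mode=profile
xdebug.output_dir=/tmp/xdebug
xdebug.start_with_request=trigger   ; profile only when XDEBUG_TRIGGER is set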
Alternatively, on PHP 5.4+ run the application using the PHP built-in server with the command composer serve, or php -S 0.0.0.0:8080 index.php if you are not using Composer.
For faster preprocessing, give write access to the bin subdirectory, or compile manually: run make in the unzipped folder (requires GCC or Clang), or nmake -f NMakeFile (requires Visual Studio 2015 or higher). See the Installation Wiki page for more.
Instead of uploading webgrind to a web server or starting a local one, you can use the official Docker image to quickly inspect existing Xdebug profiling files. To use the Docker image, run the following command with /path/to/xdebug/files replaced by the actual path of your profiling files.
docker run --rm -v /path/to/xdebug/files:/tmp -p 80:80 jokkedk/webgrind:latest
Now open http://localhost in your browser. After using webgrind you can stop the Docker container by pressing CTRL+C.
To use the built-in file viewer, mount the appropriate files under /host in the container.
Webgrind is written by Joakim Nygård and Jacob Oettinger. It would not have been possible without the great tool that Xdebug is thanks to Derick Rethans.
Current maintainer is Micah Ng.
Author: jokkedk
Source Code: https://github.com/jokkedk/webgrind
License: View license
XCGLogger is the original debug log module for use in Swift projects.
Swift does not include a C preprocessor, so developers are unable to use the debug log #define macros they would use in Objective-C. This means our traditional way of generating nice debug logs no longer works. Resorting to just plain old print calls means you lose a lot of helpful information, or requires you to type a lot more code.
XCGLogger allows you to log details to the console (and optionally a file, or other custom destinations), just like you would have with NSLog() or print(), but with additional information, such as the date, function name, filename and line number.
Go from this:
Simple message
to this:
2014-06-09 06:44:43.600 [Debug] [AppDelegate.swift:40] application(_:didFinishLaunchingWithOptions:): Simple message
Execute the following in your repository folder:
git submodule add https://github.com/DaveWoodCom/XCGLogger.git
Add the following line to your Cartfile:
github "DaveWoodCom/XCGLogger" ~> 7.0.1
Then run carthage update --no-use-binaries or just carthage update. For details of the installation and usage of Carthage, visit its project page.
Developers running Swift 5.0 and above will need to add $(SRCROOT)/Carthage/Build/iOS/ObjcExceptionBridging.framework to their Input Files in the Copy Carthage Frameworks Build Phase.
Add something similar to the following lines to your Podfile. You may need to adjust based on your platform, version/branch etc.
source 'https://github.com/CocoaPods/Specs.git'
platform :ios, '8.0'
use_frameworks!
pod 'XCGLogger', '~> 7.0.1'
Specifying the pod XCGLogger on its own will include the core framework. We're starting to add subspecs to allow you to include optional components as well:
pod 'XCGLogger/UserInfoHelpers', '~> 7.0.1': includes some experimental code to help deal with using UserInfo dictionaries to tag log messages.
Then run pod install. For details of the installation and usage of CocoaPods, visit its official web site.
Note: Before CocoaPods 1.4.0 it was not possible to use multiple pods with a mixture of Swift versions. You may need to ensure each pod is configured for the correct Swift version (check the targets in the pod project of your workspace). If you manually adjust the Swift version for a project, it'll reset the next time you run pod install. You can add a post_install hook into your Podfile to automate setting the correct Swift versions. This is largely untested, and I'm not sure it's a good solution, but it seems to work:
post_install do |installer|
  installer.pods_project.targets.each do |target|
    if ['SomeTarget-iOS', 'SomeTarget-watchOS'].include? "#{target}"
      print "Setting #{target}'s SWIFT_VERSION to 4.2\n"
      target.build_configurations.each do |config|
        config.build_settings['SWIFT_VERSION'] = '4.2'
      end
    else
      print "Setting #{target}'s SWIFT_VERSION to Undefined (Xcode will automatically resolve)\n"
      target.build_configurations.each do |config|
        config.build_settings.delete('SWIFT_VERSION')
      end
    end
  end

  print "Setting the default SWIFT_VERSION to 3.2\n"
  installer.pods_project.build_configurations.each do |config|
    config.build_settings['SWIFT_VERSION'] = '3.2'
  end
end
You can adjust that to suit your needs of course.
Add the following entry to your package's dependencies:
.Package(url: "https://github.com/DaveWoodCom/XCGLogger.git", majorVersion: 7)
This quick start method is intended just to get you up and running with the logger. You should however use the advanced usage below to get the most out of this library.
Add the XCGLogger project as a subproject to your project, and add the appropriate library as a dependency of your target(s). Under the General tab of your target, add XCGLogger.framework and ObjcExceptionBridging.framework to the Embedded Binaries section.
Then, in each source file:
import XCGLogger
In your AppDelegate (or other global file), declare a global constant to the default XCGLogger instance.
let log = XCGLogger.default
In the
application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]? = nil) // iOS, tvOS
or
applicationDidFinishLaunching(_ notification: Notification) // macOS
function, configure the options you need:
log.setup(level: .debug, showThreadName: true, showLevel: true, showFileNames: true, showLineNumbers: true, writeToFile: "path/to/file", fileLevel: .debug)
The value for writeToFile: can be a String or URL. If the file already exists, it will be cleared before we use it. Omit the parameter or set it to nil to log to the console only. You can optionally set a different log level for the file output using the fileLevel: parameter. Set it to nil or omit it to use the same log level as the console.
Then, whenever you'd like to log something, use one of the convenience methods:
log.verbose("A verbose message, usually useful when working on a specific problem")
log.debug("A debug message")
log.info("An info message, probably useful to power users looking in console.app")
log.notice("A notice message")
log.warning("A warning message, may indicate a possible error")
log.error("An error occurred, but it's recoverable, just info about what happened")
log.severe("A severe error occurred, we are likely about to crash now")
log.alert("An alert error occurred, a log destination could be made to email someone")
log.emergency("An emergency error occurred, a log destination could be made to text someone")
The different methods set the log level of the message. XCGLogger will only print messages with a log level that is greater than or equal to its current log level setting. So a logger with a level of .error will only output log messages with a level of .error, .severe, .alert, or .emergency.
XCGLogger aims to be simple to use and get you up and running quickly with as few as 2 lines of code above. But it allows for much greater control and flexibility.
A logger can be configured to deliver log messages to a variety of destinations. Using the basic setup above, the logger will output log messages to the standard Xcode debug console, and optionally a file if a path is provided. It's quite likely you'll want to send logs to more interesting places, such as the Apple System Console, a database, third party server, or another application such as NSLogger. This is accomplished by adding the destination to the logger.
Here's an example of configuring the logger to output to the Apple System Log as well as a file.
// Create a logger object with no destinations
let log = XCGLogger(identifier: "advancedLogger", includeDefaultDestinations: false)
// Create a destination for the system console log (via NSLog)
let systemDestination = AppleSystemLogDestination(identifier: "advancedLogger.systemDestination")
// Optionally set some configuration options
systemDestination.outputLevel = .debug
systemDestination.showLogIdentifier = false
systemDestination.showFunctionName = true
systemDestination.showThreadName = true
systemDestination.showLevel = true
systemDestination.showFileName = true
systemDestination.showLineNumber = true
systemDestination.showDate = true
// Add the destination to the logger
log.add(destination: systemDestination)
// Create a file log destination
let fileDestination = FileDestination(writeToFile: "/path/to/file", identifier: "advancedLogger.fileDestination")
// Optionally set some configuration options
fileDestination.outputLevel = .debug
fileDestination.showLogIdentifier = false
fileDestination.showFunctionName = true
fileDestination.showThreadName = true
fileDestination.showLevel = true
fileDestination.showFileName = true
fileDestination.showLineNumber = true
fileDestination.showDate = true
// Process this destination in the background
fileDestination.logQueue = XCGLogger.logQueue
// Add the destination to the logger
log.add(destination: fileDestination)
// Add basic app info, version info etc, to the start of the logs
log.logAppDetails()
You can configure each log destination with different options depending on your needs.
Another common usage pattern is to have multiple loggers, perhaps one for UI issues, one for networking, and another for data issues.
Each log destination can have its own log level. As a convenience, you can set the log level on the log object itself and it will pass that level to each destination. Then set the destinations that need to be different.
Note: A destination object can only be added to one logger object, adding it to a second will remove it from the first.
Alternatively you can use a closure to initialize your global variable, so that all initialization is done in one place
let log: XCGLogger = {
let log = XCGLogger(identifier: "advancedLogger", includeDefaultDestinations: false)
// Customize as needed
return log
}()
Note: This creates the log object lazily, which means it's not created until it's actually needed. This delays the initial output of the app information details. Because of this, I recommend forcing the log object to be created at app launch by adding the line let _ = log at the top of your didFinishLaunching method if you don't already log something on app launch.
You can log strings:
log.debug("Hi there!")
or pretty much anything you want:
log.debug(true)
log.debug(CGPoint(x: 1.1, y: 2.2))
log.debug(MyEnum.Option)
log.debug((4, 2))
log.debug(["Device": "iPhone", "Version": 7])
New to XCGLogger 4, you can now create filters to apply to your logger (or to specific destinations). Create and configure your filters (examples below), and then add them to the logger or destination objects by setting the optional filters property to an array containing the filters. Filters are applied in the order they exist in the array. During processing, each filter is asked if the log message should be excluded from the log. If any filter excludes the log message, it's excluded. Filters have no way to reverse the exclusion of another filter.
If a destination's filters property is nil, the log's filters property is used instead. To have one destination log everything, while having all other destinations filter something, add the filters to the log object and set the one destination's filters property to an empty array [].
Note: Unlike destinations, you can add the same filter object to multiple loggers and/or multiple destinations.
To exclude all log messages from a specific file, create an exclusion filter like so:
log.filters = [FileNameFilter(excludeFrom: ["AppDelegate.swift"], excludePathWhenMatching: true)]
excludeFrom: takes an Array<String> or Set<String> so you can specify multiple files at the same time.
excludePathWhenMatching: defaults to true so you can omit it unless you want to match paths as well.
To include log messages only for a specific set of files, create the filter using the includeFrom: initializer. It's also possible to just toggle the inverse property to flip the exclusion filter to an inclusion filter.
In order to filter log messages by tag, you must of course be able to set a tag on the log messages. Each log message can now have additional, user-defined data attached, to be used by filters (and/or formatters etc). This is handled with a userInfo: Dictionary<String, Any> object. The dictionary key should be a namespaced string to avoid collisions with future additions. Official keys will begin with com.cerebralgardens.xcglogger. The tag key can be accessed by XCGLogger.Constants.userInfoKeyTags. You definitely don't want to be typing that, so feel free to create a global shortcut: let tags = XCGLogger.Constants.userInfoKeyTags. Now you can easily tag your logs:
let sensitiveTag = "Sensitive"
log.debug("A tagged log message", userInfo: [tags: sensitiveTag])
The value for tags can be an Array<String>, Set<String>, or just a String, depending on your needs. They'll all work the same way when filtered.
Depending on your workflow and usage, you'll probably create faster methods to set up the userInfo dictionary. See below for other possible shortcuts.
Now that you have your logs tagged, you can filter easily:
log.filters = [TagFilter(excludeFrom: [sensitiveTag])]
Just like the FileNameFilter, you can use includeFrom: or toggle inverse to include only log messages that have the specified tags.
Filtering by developer is exactly like filtering by tag, only using the userInfo key of XCGLogger.Constants.userInfoKeyDevs. In fact, both filters are subclasses of the UserInfoFilter class, which you can use to create additional filters. See Extending XCGLogger below.
In large projects with multiple developers, you'll probably want to start tagging log messages, as well as indicate the developer that added the message.
While extremely flexible, the userInfo dictionary can be a little cumbersome to use. There are a few possible methods you can use to simplify things. I'm still testing these out myself so they're not officially part of the library yet (I'd love feedback or other suggestions).
I have created some experimental code to help create the UserInfo dictionaries (include the optional UserInfoHelpers subspec if using CocoaPods). Check the iOS Demo app to see it in use.
There are two structs that conform to the UserInfoTaggingProtocol protocol: Tag and Dev.
You can create an extension on each of these that suit your project. For example:
extension Tag {
    static let sensitive = Tag("sensitive")
    static let ui = Tag("ui")
    static let data = Tag("data")
}

extension Dev {
    static let dave = Dev("dave")
    static let sabby = Dev("sabby")
}
Along with these types, there's an overloaded operator | that can be used to merge them together into a dictionary compatible with the userInfo: parameter of the logging calls.
Then you can log messages like this:
log.debug("A tagged log message", userInfo: Dev.dave | Tag.sensitive)
There are some current issues I see with these UserInfoHelpers, which is why I've made them optional/experimental for now. I'd love to hear comments/suggestions for improvements.
The | operator merges dictionaries so long as there are no Sets. If one of the dictionaries contains a Set, it'll use one of them without merging them, preferring the left-hand side if both sides have a set for the same key.
Since the userInfo: parameter needs a dictionary, you can't pass in a single Dev or Tag object. You need to use at least two with the | operator to have them automatically convert to a compatible dictionary. If you only want one Tag, for example, you must access the .dictionary parameter manually: userInfo: Tag("Blah").dictionary.
All log methods operate on closures. Using the same syntactic sugar as Swift's assert() function, this approach ensures we don't waste resources building log messages that won't be output anyway, while at the same time preserving a clean call site.
For example, the following log statement won't waste resources if the debug log level is suppressed:
log.debug("The description of \(thisObject) is really expensive to create")
Similarly, let's say you have to iterate through a loop in order to do some calculation before logging the result. In Objective-C, you could put that code block between #if/#endif and prevent the code from running. But in Swift, previously you would need to still process that loop, wasting resources. With XCGLogger it's as simple as:
log.debug {
    var total = 0.0
    for receipt in receipts {
        total += receipt.total
    }
    return "Total of all receipts: \(total)"
}
In cases where you wish to selectively execute code without generating a log line, return nil, or use one of the methods: verboseExec, debugExec, infoExec, warningExec, errorExec, and severeExec.
You can create your own DateFormatter object and assign it to the logger.
let dateFormatter = DateFormatter()
dateFormatter.dateFormat = "MM/dd/yyyy hh:mma"
dateFormatter.locale = Locale.current
log.dateFormatter = dateFormatter
XCGLogger supports adding formatting codes to your log messages to enable colour in various places. The original option was to use the XcodeColors plug-in. However, Xcode (as of version 8) no longer officially supports plug-ins. You can still view your logs in colour, just not in Xcode at the moment. You can use the ANSI colour support to add colour to your fileDestination objects and view your logs via a terminal window. This gives you some extra options such as adding Bold, Italics, or (please don't) Blinking!
Once enabled, each log level can have its own colour. These colours can be customized as desired. If using multiple loggers, you could alternatively set each logger to its own colour.
An example of setting up the ANSI formatter:
if let fileDestination: FileDestination = log.destination(withIdentifier: XCGLogger.Constants.fileDestinationIdentifier) as? FileDestination {
    let ansiColorLogFormatter: ANSIColorLogFormatter = ANSIColorLogFormatter()
    ansiColorLogFormatter.colorize(level: .verbose, with: .colorIndex(number: 244), options: [.faint])
    ansiColorLogFormatter.colorize(level: .debug, with: .black)
    ansiColorLogFormatter.colorize(level: .info, with: .blue, options: [.underline])
    ansiColorLogFormatter.colorize(level: .notice, with: .green, options: [.italic])
    ansiColorLogFormatter.colorize(level: .warning, with: .red, options: [.faint])
    ansiColorLogFormatter.colorize(level: .error, with: .red, options: [.bold])
    ansiColorLogFormatter.colorize(level: .severe, with: .white, on: .red)
    ansiColorLogFormatter.colorize(level: .alert, with: .white, on: .red, options: [.bold])
    ansiColorLogFormatter.colorize(level: .emergency, with: .white, on: .red, options: [.bold, .blink])
    fileDestination.formatters = [ansiColorLogFormatter]
}
As with filters, you can use the same formatter objects for multiple loggers and/or multiple destinations. If a destination's formatters property is nil, the logger's formatters property will be used instead.
See Extending XCGLogger below for info on creating your own custom formatters.
By using Swift build flags, different log levels can be used in debugging versus staging/production. Go to Build Settings -> Swift Compiler - Custom Flags -> Other Swift Flags and add -DDEBUG to the Debug entry.
#if DEBUG
log.setup(level: .debug, showThreadName: true, showLevel: true, showFileNames: true, showLineNumbers: true)
#else
log.setup(level: .severe, showThreadName: true, showLevel: true, showFileNames: true, showLineNumbers: true)
#endif
You can set any number of options up in a similar fashion. See the updated iOSDemo app for an example of using different log destinations based on options; search for USE_NSLOG.
By default, the supplied log destinations will process the logs on the thread they're called on. This is to ensure the log message is displayed immediately when debugging an application. You can add a breakpoint immediately after a log call and see the results when the breakpoint hits.
However, if you're not actively debugging the application, processing the logs on the current thread can introduce a performance hit. You can now specify a destination process its logs on a dispatch queue of your choice (or even use a default supplied one).
fileDestination.logQueue = XCGLogger.logQueue
or even
fileDestination.logQueue = DispatchQueue.global(qos: .background)
This works extremely well when combined with the Alternate Configurations method above.
#if DEBUG
log.setup(level: .debug, showThreadName: true, showLevel: true, showFileNames: true, showLineNumbers: true)
#else
log.setup(level: .severe, showThreadName: true, showLevel: true, showFileNames: true, showLineNumbers: true)
if let consoleLog = log.logDestination(XCGLogger.Constants.baseConsoleDestinationIdentifier) as? ConsoleDestination {
    consoleLog.logQueue = XCGLogger.logQueue
}
#endif
When using the advanced configuration of the logger (see Advanced Usage above), you can now specify that the logger append to an existing log file, instead of automatically overwriting it.
Add the optional shouldAppend: parameter when initializing the FileDestination object. You can also add the appendMarker: parameter to add a marker to the log file indicating where a new instance of your app started appending. By default we'll add -- ** ** ** -- if the parameter is omitted. Set it to nil to skip appending the marker.
let fileDestination = FileDestination(writeToFile: "/path/to/file", identifier: "advancedLogger.fileDestination", shouldAppend: true, appendMarker: "-- Relaunched App --")
When logging to a file, you have the option to automatically rotate the log file to an archived destination, and have the logger automatically create a new log file in place of the old one.
Create a destination using the AutoRotatingFileDestination class and set the following properties:
targetMaxFileSize: Auto rotate once the file is larger than this
targetMaxTimeInterval: Auto rotate after this many seconds
targetMaxLogFiles: Number of archived log files to keep; older ones are automatically deleted
Those are all guidelines for the logger, not hard limits.
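A minimal sketch of configuring auto-rotation (assuming the initializer mirrors FileDestination; the limits shown are arbitrary):
let autoRotatingDestination = AutoRotatingFileDestination(writeToFile: "/path/to/app.log", identifier: "advancedLogger.autoRotatingFileDestination")
autoRotatingDestination.targetMaxFileSize = 1_048_576    // rotate at ~1 MB
autoRotatingDestination.targetMaxTimeInterval = 86_400   // or after a day
autoRotatingDestination.targetMaxLogFiles = 10           // keep the last 10 archives
log.add(destination: autoRotatingDestination)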
You can create alternate log destinations (besides the built-in ones). Your custom log destination must implement the DestinationProtocol protocol. Instantiate your object, configure it, and then add it to the XCGLogger object with add(destination:). There are two base destination classes (BaseDestination and BaseQueuedDestination) you can inherit from to handle most of the process for you, requiring you to only implement one additional method in your custom class. Take a look at ConsoleDestination and FileDestination for examples.
You can also create custom filters or formatters. Take a look at the provided versions as a starting point. Note that filters and formatters have the ability to alter the log messages as they're processed. This means you can create a filter that strips passwords, highlights specific words, encrypts messages, etc.
XCGLogger is the best logger available for Swift because of the contributions from the community like you. There are many ways you can help continue to make it great.
Note: when submitting a pull request, please use lots of small commits versus one huge commit. It makes it much easier to merge in when there are several pull requests that need to be combined for a new version.
If you find this library helpful, you'll definitely find this other tool helpful:
Watchdog: https://watchdogforxcode.com/
Also, please check out some of my other projects:
The change log is now in its own file: CHANGELOG.md
Author: DaveWoodCom
Source Code: https://github.com/DaveWoodCom/XCGLogger
License: MIT license
Clockwork is a development tool for PHP available right in your browser. Clockwork gives you an insight into your application runtime - including request data, performance metrics, log entries, database queries, cache queries, redis commands, dispatched events, queued jobs, rendered views and more - for HTTP requests, commands, queue jobs and tests.
This repository contains the server-side component of Clockwork.
Check out on the Clockwork website for details.
Install the Clockwork library via Composer.
$ composer require itsgoingd/clockwork
Congratulations, you are done! To enable more features like commands or queue jobs profiling, publish the configuration file via the vendor:publish Artisan command.
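For a typical Laravel app that invocation looks like this (provider class name as documented by Clockwork):
php artisan vendor:publish --provider="Clockwork\Support\Laravel\ClockworkServiceProvider"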
Note: If you are using the Laravel route cache, you will need to refresh it using the route:cache Artisan command.
Read full installation instructions on the Clockwork website.
The Clockwork server-side component collects and stores data about your application.
By default, Clockwork is only active when your app is in debug mode. You can choose to explicitly enable or disable Clockwork, or even set Clockwork to always collect data without exposing it for further analysis.
We collect a whole bunch of useful data by default, but you can enable more features or disable features you don't need in the config file.
Some features might allow for advanced options, eg. for database queries you can set a slow query threshold or enable detecting of duplicate (N+1) queries. Check out the config file to see all what Clockwork can do.
There are several options that allow you to choose for which requests Clockwork is active.
On-demand mode will collect data only when the Clockwork app is open. You can even specify a secret to be set in the app settings in order to collect requests. Errors only will record only requests ending with 4xx and 5xx responses. Slow only will collect only requests with responses above the set slow threshold. You can also filter the collected and recorded requests with a custom closure. CORS pre-flight requests will not be collected by default.
New in Clockwork 4.1, Artisan commands, queue jobs and tests can now also be collected; you need to enable this in the config file.
Clockwork also collects stack traces for data like log messages or database queries. The last 10 frames of the trace are collected by default. You can change the frames limit or disable this feature in the configuration file.
Web interface
Open your.app/clockwork to view and interact with the collected data.
The app will show all executed requests, which is useful when the request is not made by a browser but, for example, by a mobile application you are developing an API for.
Browser extension
A browser dev tools extension is also available for Chrome and Firefox:
Toolbar
Clockwork now gives you an option to show basic request information in the form of a toolbar in your app.
The toolbar is fully rendered client-side and requires installing a tiny javascript library.
Learn more on the Clockwork website.
You can log any variable via the clock() helper, from a simple string to an array or object, even multiple values:
clock(User::first(), auth()->user(), $username)
The clock() helper function returns its first argument, so you can easily add inline debugging statements to your code:
User::create(clock($request->all()))
If you want to specify a log level, you can use the long-form call:
clock()->info("User {$username} logged in!")
Timeline gives you a visual representation of your application runtime.
To add an event to the timeline - start it with a description, execute the tracked code and finish the event. A fluent api is available to further configure the event.
// using timeline api with begin/end and fluent configuration
clock()->event('Importing tweets')->color('purple')->begin();
...
clock()->event('Importing tweets')->end();
Alternatively you can execute the tracked code block as a closure. You can also choose to use an array based configuration instead of the fluent api.
// using timeline api with run and array-based configuration
clock()->event('Updating cache', [ 'color' => 'green' ])->run(function () {
...
});
Read more about available features on the Clockwork website.
Author: itsgoingd
Source Code: https://github.com/itsgoingd/clockwork
License: MIT license
In today's post we will learn about 9 Favorite PHP Libraries for Debugging and Profiling.
What is Debugging and Profiling?
Debugging is getting the code to work as you intended; profiling is assessing how the code carries out a given task on a given platform and how its performance might be improved; validation is assessing how accurately the code carries out that task.
Another web debugging console using Google Chrome.
PHP Console allows you to handle PHP errors & exceptions, dump variables, execute PHP code remotely and many other things using Google Chrome extension PHP Console and PhpConsole server library.
{
    "require": {
        "php-console/php-console": "^3.1"
    }
}
Or
$ composer require php-console/php-console
Usage
You can try most of PHP Console features on live demo server.
There is a PhpConsole\Connector class that initializes connection between PHP server and Google Chrome extension. Connection is initialized when PhpConsole\Connector instance is initialized:
$connector = PhpConsole\Connector::getInstance();
It will also be initialized when you call PhpConsole\Handler::getInstance() or PhpConsole\Helper::register().
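A minimal sketch of the typical flow (the helper registers the global PC class; the variable names are hypothetical):
<?php
require 'vendor/autoload.php';

// Registers the connector and exposes the global PC helper class.
PhpConsole\Helper::register();

// Dump values straight to the Chrome extension, optionally with a tag.
PC::debug($user, 'auth');
PC::debug($_SERVER);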
PHP Console uses headers to communicate with the client, so PhpConsole\Connector::getInstance() or PhpConsole\Handler::getInstance() must be called before any output. If headers are already sent before script shutdown, or the PHP Console response package size exceeds the web server's header size limit, then PHP Console will store the response data in a PhpConsole\Storage implementation and send it to the client in STDOUT, in an additional HTTP request. So there is no limit on the PHP Console response package size.
By default PHP Console uses PhpConsole\Storage\Session for postponed responses, so all temporary data will be stored in $_SESSION. But there is a problem with frameworks like Symfony and Laravel that override the PHP session handler. In this case you should use another PhpConsole\Storage implementation, like:
// Can be called only before PhpConsole\Connector::getInstance() and PhpConsole\Handler::getInstance()
PhpConsole\Connector::setPostponeStorage(new PhpConsole\Storage\File('/tmp/pc.data'));
See all available PhpConsole\Storage implementations in /src/PhpConsole/Storage.
If you want error sources and trace paths to be displayed in a shorter form, call:
$connector->setSourcesBasePath('/path/to/project');
So paths like /path/to/project/module/file.php will be displayed on the client as /module/file.php.
If your internal server encoding is not UTF-8, you need to call:
$connector->setServerEncoding('CP1251');
The PhpConsole server library is optimized to initialize lazily, only for clients that have the Google Chrome extension PHP Console installed. There is an example of correctly initializing PhpConsole on your production server.
A debugging and profiling tool.
At first glance Kint is just a pretty replacement for var_dump(), print_r() and debug_backtrace().
However, it's much, much more than that. You will eventually wonder how you developed without it.
One of the main goals of Kint is to be zero setup.
Download the file and simply require it:
<?php
require 'kint.phar';
Or install it with Composer:
composer require kint-php/kint --dev
<?php
Kint::dump($GLOBALS, $_SERVER); // pass any number of parameters
d($GLOBALS, $_SERVER); // or simply use d() as a shorthand
Kint::trace(); // Debug backtrace
s($GLOBALS); // Basic output mode
~d($GLOBALS); // Text only output mode
Kint::$enabled_mode = false; // Disable kint
d('Get off my lawn!'); // Debugs no longer have any effect
A simple metrics API library.
Simple library that abstracts different metrics collectors. I find it necessary to have a consistent and simple metrics API that doesn't cause vendor lock-in.
Using Composer:
composer require beberlei/metrics
You can instantiate clients:
<?php
$collector = \Beberlei\Metrics\Factory::create('statsd');
You can measure stats:
<?php
$collector->increment('foo.bar');
$collector->decrement('foo.bar');
$start = microtime(true);
$diff = microtime(true) - $start;
$collector->timing('foo.bar', $diff);
$value = 1234;
$collector->measure('foo.bar', $value);
Some backends defer sending and aggregate all information; make sure to call flush:
<?php
$collector->flush();
<?php
$statsd = \Beberlei\Metrics\Factory::create('statsd');
$zabbix = \Beberlei\Metrics\Factory::create('zabbix', array(
    'hostname' => 'foo.beberlei.de',
    'server' => 'localhost',
    'port' => 10051,
));
$zabbixConfig = \Beberlei\Metrics\Factory::create('zabbix_file', array(
    'hostname' => 'foo.beberlei.de',
    'file' => '/etc/zabbix/zabbix_agentd.conf'
));
$librato = \Beberlei\Metrics\Factory::create('librato', array(
    'hostname' => 'foo.beberlei.de',
    'username' => 'foo',
    'password' => 'bar',
));
$null = \Beberlei\Metrics\Factory::create('null');
A self-contained, code-coverage-compatible driver.
API
/**
* Shall start recording coverage information
*/
function \pcov\start() : void;
/**
* Shall stop recording coverage information
*/
function \pcov\stop() : void;
/**
* Shall collect coverage information
*
 * @param integer $type define which type of information should be collected
* \pcov\all shall collect coverage information for all files
* \pcov\inclusive shall collect coverage information for the specified files
* \pcov\exclusive shall collect coverage information for all but the specified files
* @param array $filter path of files (realpath) that should be filtered
*
* @return array
*/
function \pcov\collect(int $type = \pcov\all, array $filter = []) : array;
/**
* Shall clear stored information
*
* @param bool $files set true to clear file tables
*
* Note: clearing the file tables may have surprising consequences
*/
function \pcov\clear(bool $files = false) : void;
/**
* Shall return list of files waiting to be collected
*/
function \pcov\waiting() : array;
/**
* Shall return the current size of the trace and cfg arena
*/
function \pcov\memory() : int;
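Putting the API together, a minimal collection run might look like this (the included file is hypothetical):
<?php
\pcov\start();                      // begin recording coverage
require __DIR__ . '/target.php';    // code under test
\pcov\stop();                       // stop recording
$coverage = \pcov\collect();        // per-file line coverage data
var_dump($coverage);
\pcov\clear();                      // reset stored information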
Configuration
PCOV is configured using PHP.ini:
Option | Default | Changeable | Description |
---|---|---|---|
pcov.enabled | 1 | SYSTEM | enable or disable zend hooks for pcov |
pcov.directory | auto | SYSTEM,PERDIR | restrict collection to files under this path |
pcov.exclude | unused | SYSTEM,PERDIR | exclude files under pcov.directory matching this PCRE |
pcov.initial.memory | 65536 | SYSTEM,PERDIR | shall set initial size of arena |
pcov.initial.files | 64 | SYSTEM,PERDIR | shall set initial size of tables |
The recommended defaults for production should be:
pcov.enabled = 0
The recommended defaults for development should be:
pcov.enabled = 1
pcov.directory = /path/to/your/source/directory
When pcov.directory is left unset, PCOV will attempt to find src, lib, or app in the current working directory, in that order; if none are found, the current directory will be used, which may waste resources storing coverage information for the test suite.
If pcov.directory contains test code, it's recommended to set pcov.exclude to avoid wasting resources.
To avoid unnecessary allocation of additional arenas for traces and control flow graphs, pcov.initial.memory should be set according to the memory required by the test suite, which may be discovered with \pcov\memory().
To avoid reallocation of tables, pcov.initial.files should be set to a number higher than the number of files that will be loaded during testing, inclusive of test files.
Note that arenas are allocated in chunks: if the chunk size is set to 65536 and PCOV requires 65537 bytes, the system will allocate two chunks, each 65536 bytes. When setting arena space, therefore, be generous in your estimates.
A web debugging console.
Creating a test file or using PHP's interactive mode can be a bit cumbersome when you want to try random PHP snippets. This console allows you to run small bits of code easily, right from your browser.
It is secure, since it is accessible only from the local host, and very easy to set up and use.
Clone the git repo or download it as a zip/tarball, drop it somewhere in your local web document root and access it with http://localhost/path/to/php-console
You can also install it with Composer using this command:
composer create-project --stability=dev --keep-vcs seld/php-console
To update it, just run git pull in the directory to pull the latest changes in.
You can use the internal PHP server too: run php -S localhost:1337 in a terminal and go to http://localhost:1337/.
Default settings are available in config.php.dist; if you would like to modify them, you can copy the file to config.php and edit the settings.
Code contributions or ideas are obviously much welcome. Send pull requests or issues on github.
The config also offers a bootstrap option to be included before source evaluation.
A benchmarking framework.
PHPBench is a benchmark runner for PHP analogous to PHPUnit but for performance rather than correctness.
Features include:
composer require phpbench/phpbench --dev
See the installation instructions for more options.
The project documentation shows example output: running benchmarks and comparing against a baseline, an aggregated report, and the blinken logger.
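A minimal benchmark sketch (PHPBench discovers classes suffixed Bench and methods prefixed bench; the revs/iterations values are arbitrary):
<?php
// Md5Bench.php
class Md5Bench
{
    /**
     * @Revs(1000)
     * @Iterations(5)
     */
    public function benchMd5(): void
    {
        md5('hello world');
    }
}
Run it with vendor/bin/phpbench run Md5Bench.php --report=aggregate.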
A low-overhead sampling profiler.
phpspy is a low-overhead sampling profiler for PHP. It works with non-ZTS PHP 7.0+ with CLI, Apache, and FPM SAPIs on 64-bit Linux 3.2+.
$ git clone https://github.com/adsr/phpspy.git
Cloning into 'phpspy'...
...
$ cd phpspy
$ make
...
$ sudo ./phpspy --limit=1000 --pid=$(pgrep -n httpd) >traces
...
$ ./stackcollapse-phpspy.pl <traces | ./vendor/flamegraph.pl >flame.svg
$ google-chrome flame.svg # View flame.svg in browser
$ make # Use built-in structs
$ # or
$ USE_ZEND=1 make ... # Use Zend structs (requires PHP development headers)
$ ./phpspy -h
Usage:
phpspy [options] -p <pid>
phpspy [options] -P <pgrep-args>
phpspy [options] [--] <cmd>
Options:
-h, --help Show this help
-p, --pid=<pid> Trace PHP process at `pid`
-P, --pgrep=<args> Concurrently trace processes that
match pgrep `args` (see also `-T`)
-T, --threads=<num> Set number of threads to use with `-P`
(default: 16)
-s, --sleep-ns=<ns> Sleep `ns` nanoseconds between traces
(see also `-H`) (default: 10101010)
-H, --rate-hz=<hz> Trace `hz` times per second
(see also `-s`) (default: 99)
-V, --php-version=<ver> Set PHP version
(default: auto;
supported: 70 71 72 73 74 80 81 82)
-l, --limit=<num> Limit total number of traces to capture
(approximate limit in pgrep mode)
(default: 0; 0=unlimited)
-i, --time-limit-ms=<ms> Stop tracing after `ms` milliseconds
(second granularity in pgrep mode)
(default: 0; 0=unlimited)
-n, --max-depth=<max> Set max stack trace depth
(default: -1; -1=unlimited)
-r, --request-info=<opts> Set request info parts to capture
(q=query c=cookie u=uri p=path
capital=negation)
(default: QCUP; none)
-m, --memory-usage Capture peak and current memory usage
with each trace (requires target PHP
process to have debug symbols)
-o, --output=<path> Write phpspy output to `path`
(default: -; -=stdout)
-O, --child-stdout=<path> Write child stdout to `path`
(default: phpspy.%d.out)
-E, --child-stderr=<path> Write child stderr to `path`
(default: phpspy.%d.err)
-x, --addr-executor-globals=<hex> Set address of executor_globals in hex
(default: 0; 0=find dynamically)
-a, --addr-sapi-globals=<hex> Set address of sapi_globals in hex
(default: 0; 0=find dynamically)
-1, --single-line Output in single-line mode
-b, --buffer-size=<size> Set output buffer size to `size`.
Note: In `-P` mode, setting this
above PIPE_BUF (4096) may lead to
interlaced writes across threads
unless `-J m` is specified.
(default: 4096)
-f, --filter=<regex> Filter output by POSIX regex
(default: none)
-F, --filter-negate=<regex> Same as `-f` except negated
-d, --verbose-fields=<opts> Set verbose output fields
(p=pid t=timestamp
capital=negation)
(default: PT; none)
-c, --continue-on-error Attempt to continue tracing after
encountering an error
-#, --comment=<any> Ignored; intended for self-documenting
commands
-@, --nothing Ignored
-v, --version Print phpspy version and exit
Experimental options:
-j, --event-handler=<handler> Set event handler (fout, callgrind)
(default: fout)
-J, --event-handler-opts=<opts> Set event handler options
(fout: m=use mutex to prevent
interlaced writes on stdout in `-P`
mode)
-S, --pause-process Pause process while reading stacktrace
(unsafe for production!)
-e, --peek-var=<varspec> Peek at the contents of the var located
at `varspec`, which has the format:
<varname>@<path>:<lineno>
<varname>@<path>:<start>-<end>
e.g., xyz@/path/to.php:10-20
-g, --peek-global=<glospec> Peek at the contents of a global var
located at `glospec`, which has
the format: <global>.<key>
where <global> is one of:
post|get|cookie|server|files|globals
e.g., server.REQUEST_TIME
-t, --top Show dynamic top-like output
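For example, here is a hypothetical invocation combining the sampling options with the experimental variable peeking (the pid lookup, path, and variable name are illustrative):
$ sudo ./phpspy --rate-hz=25 --time-limit-ms=60000 \
      --pid=$(pgrep -n php-fpm) \
      -e 'user@/var/www/app/login.php:42' > traces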
A simple error detection, logging and time measuring library.
Tracy library is a useful helper for everyday PHP programmers. It helps you to quickly detect and correct errors, log errors, dump variables, and measure the execution time of scripts and queries.
PHP's flexibility makes it a perfect language for producing hard-to-detect errors, which makes Tracy\Debugger all the more valuable as a diagnostic tool. If you are meeting Tracy for the first time, believe me: your life will start to be divided into the one before Tracy and the one with her. Welcome to the good part!
The recommended way to install Tracy is via Composer:
composer require tracy/tracy
Alternatively, you can download the whole package or tracy.phar file.
Tracy | compatible with PHP | compatible with browsers |
---|---|---|
Tracy 3.0 | PHP 8.0 – 8.2 | Chrome 64+, Firefox 69+, Safari 15.4+ and iOS Safari 15.4+ |
Tracy 2.9 | PHP 7.2 – 8.2 | Chrome 64+, Firefox 69+, Safari 13.1+ and iOS Safari 13.4+ |
Tracy 2.8 | PHP 7.2 – 8.1 | Chrome 55+, Firefox 53+, Safari 11+ and iOS Safari 11+ |
Tracy 2.7 | PHP 7.1 – 8.0 | Chrome 55+, Firefox 53+, MS Edge 16+, Safari 11+ and iOS Safari 11+ |
Tracy 2.6 | PHP 7.1 – 8.0 | Chrome 49+, Firefox 45+, MS Edge 14+, Safari 10+ and iOS Safari 10.2+ |
Tracy 2.5 | PHP 5.4 – 7.4 | Chrome 49+, Firefox 45+, MS Edge 12+, Safari 10+ and iOS Safari 10.2+ |
Tracy 2.4 | PHP 5.4 – 7.2 | Chrome 29+, Firefox 28+, IE 11+ (except AJAX), MS Edge 12+, Safari 9+ and iOS Safari 9.2+ |
Activating Tracy is easy. Simply add these two lines of code, preferably just after library loading (like require 'vendor/autoload.php') and before any output is sent to the browser:
use Tracy\Debugger;
Debugger::enable();
The first thing you will notice on the website is a Debugger Bar.
(If you do not see anything, it means that Tracy is running in production mode. For security reasons, Tracy is visible only on localhost. You may force Tracy to run in development mode by passing Debugger::DEVELOPMENT as the first parameter of the enable() method.)
Calling enable() also changes the error reporting level to E_ALL.
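For example, to force development mode explicitly, or to enable production mode with error logging (the log directory here is illustrative):
use Tracy\Debugger;

// Force development mode even when not on localhost:
Debugger::enable(Debugger::DEVELOPMENT);

// Or production mode, logging errors to a directory of your choice:
// Debugger::enable(Debugger::PRODUCTION, __DIR__ . '/log');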
The Debugger Bar is a floating panel displayed in the bottom right corner of the page. You can move it with the mouse, and it will remember its position after a page reload.
You can add other useful panels to the Debugger Bar. You can find interesting ones in addons or you can create your own.
If you do not want to show the Debugger Bar, set:
Debugger::$showBar = false;
Surely, you know how PHP reports errors: there is something like this in the page source code:
<b>Parse error</b>: syntax error, unexpected '}' in <b>HomepagePresenter.php</b> on line <b>15</b>
or an uncaught exception:
<b>Fatal error</b>: Uncaught Nette\MemberAccessException: Call to undefined method Nette\Application\UI\Form::addTest()? in /sandbox/vendor/nette/utils/src/Utils/ObjectMixin.php:100
Stack trace:
#0 /sandbox/vendor/nette/utils/src/Utils/Object.php(75): Nette\Utils\ObjectMixin::call(Object(Nette\Application\UI\Form), 'addTest', Array)
#1 /sandbox/app/forms/SignFormFactory.php(32): Nette\Object->__call('addTest', Array)
#2 /sandbox/app/presenters/SignPresenter.php(21): App\Forms\SignFormFactory->create()
#3 /sandbox/vendor/nette/component-model/src/ComponentModel/Container.php(181): App\Presenters\SignPresenter->createComponentSignInForm('signInForm')
#4 /sandbox/vendor/nette/component-model/src/ComponentModel/Container.php(139): Nette\ComponentModel\Container->createComponent('signInForm')
#5 /sandbox/temp/cache/latte/15206b353f351f6bfca2c36cc.php(17): Nette\ComponentModel\Co in <b>/sandbox/vendor/nette/utils/src/Utils/ObjectMixin.php</b> on line <b>100</b><br />
A pretty error handling library.
Whoops is an error handler framework for PHP. Out-of-the-box, it provides a pretty error interface that helps you debug your web projects, but at heart it's a simple yet powerful stacked error handling system.
If you use Laravel 4, Laravel 5.5+ or Mezzio, you already have Whoops. There are also community-provided instructions on how to integrate Whoops into Silex 1, Silex 2, Phalcon, Laravel 3, Laravel 5, CakePHP 3, CakePHP 4, Zend 2, Zend 3, Yii 1, FuelPHP, Slim, Pimple, Laminas, or any framework consuming StackPHP middlewares or PSR-7 middlewares.
If you are not using any of these frameworks, here's a very simple way to install:
Use Composer to install Whoops into your project:
composer require filp/whoops
Register the pretty handler in your code:
$whoops = new \Whoops\Run;
$whoops->pushHandler(new \Whoops\Handler\PrettyPageHandler);
$whoops->register();
For more options, have a look at the example files in examples/ to get a feel for how things work. Also take a look at the API Documentation and the list of available handlers below.
You may also want to override some of the system calls Whoops makes. To do that, extend Whoops\Util\SystemFacade, override the functions you want, and pass the instance as the argument to the Run constructor.
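A sketch of that pattern follows; the subclass name is hypothetical, and you should check the SystemFacade source for the exact methods available to override:
// Hypothetical facade that keeps Whoops from changing the HTTP status code.
class MySystemFacade extends \Whoops\Util\SystemFacade
{
    public function setHttpResponseCode($httpCode)
    {
        // Assumed override point; the parent implementation calls
        // http_response_code(). Here we simply report the code unchanged.
        return $httpCode;
    }
}

$whoops = new \Whoops\Run(new MySystemFacade());
$whoops->pushHandler(new \Whoops\Handler\PrettyPageHandler());
$whoops->register();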
You may also collect the HTML generated to process it yourself:
$whoops = new \Whoops\Run;
$whoops->allowQuit(false);
$whoops->writeToOutput(false);
$whoops->pushHandler(new \Whoops\Handler\PrettyPageHandler);
$html = $whoops->handleException($e);
whoops currently ships with the following built-in handlers, available in the Whoops\Handler namespace:
- PrettyPageHandler - shows a pretty error page when something goes pants-up
- PlainTextHandler - outputs a plain text message for use in CLI applications
- CallbackHandler - wraps a closure or other callable as a handler; you do not need to use this handler explicitly, whoops will automatically wrap any closure or callable you pass to Whoops\Run::pushHandler
- JsonResponseHandler - captures exceptions and returns information on them as a JSON string; can be used, for example, to play nice with AJAX requests
- XmlResponseHandler - captures exceptions and returns information on them as an XML string; can be used, for example, to play nice with AJAX requests
You can also use pluggable handlers, such as the SOAP handler.
Thank you for following this article.
1662437040
A tiny JavaScript debugging utility modelled after Node.js core's debugging technique. Works in Node.js and web browsers.
$ npm install debug
debug exposes a function; simply pass this function the name of your module, and it will return a decorated version of console.error for you to pass debug statements to. This allows you to toggle the debug output for different parts of your module, as well as the module as a whole.
Example app.js:
var debug = require('debug')('http')
, http = require('http')
, name = 'My App';
// fake app
debug('booting %o', name);
http.createServer(function(req, res){
debug(req.method + ' ' + req.url);
res.end('hello\n');
}).listen(3000, function(){
debug('listening');
});
// fake worker of some kind
require('./worker');
Example worker.js:
var a = require('debug')('worker:a')
, b = require('debug')('worker:b');
function work() {
a('doing lots of uninteresting work');
setTimeout(work, Math.random() * 1000);
}
work();
function workb() {
b('doing some work');
setTimeout(workb, Math.random() * 2000);
}
workb();
The DEBUG environment variable is then used to enable these based on space- or comma-delimited names.
Here are some examples:
CMD
On Windows the environment variable is set using the set command.
set DEBUG=*,-not_this
Example:
set DEBUG=* & node app.js
PowerShell (VS Code default)
PowerShell uses different syntax to set environment variables.
$env:DEBUG = "*,-not_this"
Example:
$env:DEBUG='app';node app.js
Then, run the program to be debugged as usual.
npm script example:
"windowsDebug": "@powershell -Command $env:DEBUG='*';node app.js",
Every debug instance has a color generated for it based on its namespace name. This helps when visually parsing the debug output to identify which debug instance a debug line belongs to.
In Node.js, colors are enabled when stderr is a TTY. You should also install the supports-color module alongside debug; otherwise debug will only use a small handful of basic colors.
Colors are also enabled in "Web Inspectors" that understand the %c formatting option: WebKit web inspectors, Firefox (since version 31), and the Firebug plugin for Firefox (any version).
When actively developing an application, it can be useful to see the time spent between one debug() call and the next. Suppose, for example, you invoke debug() before requesting a resource and again afterwards; the "+NNNms" suffix shows you how much time was spent between calls.
When stdout is not a TTY, Date#toISOString() is used instead, making it more useful for logging the debug information, as shown below:
If you're using this in one or more of your libraries, you should use the name of your library so that developers may toggle debugging as desired without guessing names. If you have more than one debugger, you should prefix them with your library name and use ":" to separate features. For example, "bodyParser" from Connect would then be "connect:bodyParser". If you append a "*" to the end of your name, it will always be enabled regardless of the setting of the DEBUG environment variable. You can then use it for normal output as well as debug output.
The * character may be used as a wildcard. Suppose, for example, your library has debuggers named "connect:bodyParser", "connect:compress", and "connect:session"; instead of listing all three with DEBUG=connect:bodyParser,connect:compress,connect:session, you may simply do DEBUG=connect:*, or to run everything using this module simply use DEBUG=*.
You can also exclude specific debuggers by prefixing them with a "-" character. For example, DEBUG=*,-connect:* would include all debuggers except those starting with "connect:".
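For example, with the app.js and worker.js files shown earlier:
$ DEBUG=worker:* node app.js
$ DEBUG=*,-worker:b node app.js
The first command enables both worker namespaces; the second enables everything except worker:b.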
When running through Node.js, you can set a few environment variables that will change the behavior of the debug logging:
Name | Purpose |
---|---|
DEBUG | Enables/disables specific debugging namespaces. |
DEBUG_HIDE_DATE | Hide date from debug output (non-TTY). |
DEBUG_COLORS | Whether or not to use colors in the debug output. |
DEBUG_DEPTH | Object inspection depth. |
DEBUG_SHOW_HIDDEN | Shows hidden properties on inspected objects. |
Note: The environment variables beginning with DEBUG_ end up being converted into an Options object that gets used with the %o/%O formatters. See the Node.js documentation for util.inspect() for the complete list.
Debug uses printf-style formatting. Below are the officially supported formatters:
Formatter | Representation |
---|---|
%O | Pretty-print an Object on multiple lines. |
%o | Pretty-print an Object all on a single line. |
%s | String. |
%d | Number (both integer and float). |
%j | JSON. Replaced with the string '[Circular]' if the argument contains circular references. |
%% | Single percent sign ('%'). This does not consume an argument. |
You can add custom formatters by extending the debug.formatters object. For example, if you wanted to add support for rendering a Buffer as hex with %h, you could do something like:
const createDebug = require('debug')
createDebug.formatters.h = (v) => {
return v.toString('hex')
}
// …elsewhere
const debug = createDebug('foo')
debug('this is hex: %h', Buffer.from('hello world'))
// foo this is hex: 68656c6c6f20776f726c64 +0ms
You can build a browser-ready script using browserify, or just use the browserify-as-a-service build, if you don't want to build it yourself.
Debug's enable state is currently persisted by localStorage. Consider the situation shown below where you have worker:a and worker:b, and wish to debug both. You can enable this using localStorage.debug:
localStorage.debug = 'worker:*'
And then refresh the page.
a = debug('worker:a');
b = debug('worker:b');
setInterval(function(){
a('doing some work');
}, 1000);
setInterval(function(){
b('doing some work');
}, 1200);
In Chromium-based web browsers (e.g. Brave, Chrome, and Electron), the JavaScript console will, by default, only show messages logged by debug if the "Verbose" log level is enabled.
By default debug will log to stderr; however, this can be configured per-namespace by overriding the log method:
Example stdout.js:
var debug = require('debug');
var error = debug('app:error');
// by default stderr is used
error('goes to stderr!');
var log = debug('app:log');
// set this namespace to log via console.log
log.log = console.log.bind(console); // don't forget to bind to console!
log('goes to stdout');
error('still goes to stderr!');
// set all output to go via console.info
// overrides all per-namespace log settings
debug.log = console.info.bind(console);
error('now goes to stdout via console.info');
log('still goes to stdout, but via console.info now');
You can simply extend a debugger:
const log = require('debug')('auth');
//creates new debug instance with extended namespace
const logSign = log.extend('sign');
const logLogin = log.extend('login');
log('hello'); // auth hello
logSign('hello'); //auth:sign hello
logLogin('hello'); //auth:login hello
You can also enable debug dynamically by calling the enable() method:
let debug = require('debug');
console.log(1, debug.enabled('test'));
debug.enable('test');
console.log(2, debug.enabled('test'));
debug.disable();
console.log(3, debug.enabled('test'));
This prints:
1 false
2 true
3 false
Usage: enable(namespaces), where namespaces can include modes separated by a colon and wildcards.
Note that calling enable() completely overrides the previously set DEBUG variable:
$ DEBUG=foo node -e 'var dbg = require("debug"); dbg.enable("bar"); console.log(dbg.enabled("foo"))'
=> false
disable() will disable all namespaces. The function returns the namespaces currently enabled (and skipped). This can be useful if you want to disable debugging temporarily without knowing what was enabled to begin with.
For example:
let debug = require('debug');
debug.enable('foo:*,-foo:bar');
let namespaces = debug.disable();
debug.enable(namespaces);
Note: There is no guarantee that the string will be identical to the initial enable string, but semantically they will be identical.
After you've created a debug instance, you can determine whether or not it is enabled by checking the enabled property:
const debug = require('debug')('http');
if (debug.enabled) {
// do stuff...
}
You can also manually toggle this property to force the debug instance to be enabled or disabled.
Due to the way debug detects whether the output is a TTY, colors are not shown in child processes when stderr is piped. A solution is to pass the DEBUG_COLORS=1 environment variable to the child process.
For example:
const { fork } = require('child_process');

const worker = fork(WORKER_WRAP_PATH, [workerPath], {
stdio: [
/* stdin: */ 0,
/* stdout: */ 'pipe',
/* stderr: */ 'pipe',
'ipc',
],
env: Object.assign({}, process.env, {
DEBUG_COLORS: 1 // without this setting, colors won't be shown
}),
});
worker.stderr.pipe(process.stderr, { end: false });
Author: Debug-js
Source Code: https://github.com/debug-js/debug
License: MIT license
1661048820
The {boomer} package provides debugging tools that let you inspect the intermediate results of a call. The output looks as if we exploded a call into its parts, hence the name.
- boom() prints the intermediate results of a call or a code chunk.
- rig() creates a copy of a function which will display the intermediate results of all the calls in its body.
- rig_in_namespace() rigs a namespaced function in place, so it is always verbose even when called by other existing functions. It is especially handy for package development.
Install the CRAN version with:
install.packages("boomer")
Or development version with:
remotes::install_github("moodymudskipper/boomer")
boom()
library(boomer)
boom(1 + !1 * 2)
boom(subset(head(mtcars, 2), qsec > 17))
You can use boom() with {magrittr} pipes; just pipe to boom() at the end of a pipe chain.
library(magrittr)
mtcars %>%
head(2) %>%
subset(qsec > 17) %>%
boom()
If a call fails, {boomer} will print the intermediate outputs up to the occurrence of the error, which can help with debugging:
"tomato" %>%
substr(1, 3) %>%
toupper() %>%
sqrt() %>%
boom()
boom() features optional arguments:
- clock: set to TRUE to see how long each step (in isolation!) took to run.
- print: set to a function such as str to change what is printed (see ?boom for how to print differently depending on class). Useful alternatives would be dplyr::glimpse or invisible (to print nothing).
One use case is when the output is too long.
boom(lapply(head(cars), sqrt), clock = TRUE, print = str)
boom() also works on loops and multi-line expressions.
boom(for(i in 1:3) paste0(i, "!"))
rig()
rig() a function in order to boom() its body; its arguments are printed by default when they are evaluated.
hello <- function(x) {
if(!is.character(x) | length(x) != 1) {
stop("`x` should be a string")
}
paste0("Hello ", x, "!")
}
rig(hello)("world")
rig_in_namespace()
rig_in_namespace() was designed to assist package development. Functions are rigged in place, and we can explode the calls in the bodies of several functions at a time.
For instance, you might have these functions in a package:
cylinder_vol <- function(r, h) {
h * disk_area(r)
}
disk_area <- function(r) {
pi * r^2
}
cylinder_vol depends on disk_area; call devtools::load_all(), then rig_in_namespace() on both, and enjoy the detailed output:
devtools::load_all()
rig_in_namespace(cylinder_vol, disk_area)
cylinder_vol(3,10)
To avoid typing boom() all the time, you can use the provided addin named “Explode a call with boom()”: just assign a key combination to it (I use Ctrl+Shift+Alt+B on Windows), select the call you'd like to explode, and fire away!
Several options are provided to tweak the printed output of {boomer}'s functions and addin; see ?boomer to learn about them.
In particular, on some operating systems {boomer}'s output might not always look good in markdown reports or reprexes, due to how the system handles UTF-8 characters. In this case one can use options(boomer.safe_print = TRUE) for more satisfactory output.
{boomer} prints the output of intermediate steps as they are executed, and thus doesn't say anything about what isn't executed; this is in contrast with functions like lobstr::ast(), which return the parse tree.
Thanks to @data_question for suggesting the name {boomer} on Twitter.
Author: Moodymudskipper
Source Code: https://github.com/moodymudskipper/boomer
1660248120
This package contains simple utilities that may help debug julia code.
Install with
pkg> dev https://github.com/timholy/DebuggingUtilities.jl.git
When you use it in packages, you should activate the project and add DebuggingUtilities as a dependency using project> dev DebuggingUtilities.
@showln shows variable values and the line number at which the statement was executed. This can be useful when variables change value in the course of a single function. For example:
using DebuggingUtilities
function foo()
x = 5
@showln x
x = 7
@showln x
nothing
end
might, when called as foo(), produce output like
x = 5
(in /home/tim/.julia/dev/DebuggingUtilities/test/funcdefs.jl:5)
x = 7
(in /home/tim/.julia/dev/DebuggingUtilities/test/funcdefs.jl:7)
7
@showlnt is for recursion, and uses indentation to show nesting depth. For example,
function recurses(n)
@showlnt n
n += 1
@showlnt n
if n < 10
n = recurses(n+1)
end
return n
end
might, when called as recurses(1), generate
n = 1
(in recurses at /home/tim/.julia/dev/DebuggingUtilities/test/funcdefs.jl:10)
n = 2
(in recurses at /home/tim/.julia/dev/DebuggingUtilities/test/funcdefs.jl:12)
n = 3
(in recurses at /home/tim/.julia/dev/DebuggingUtilities/test/funcdefs.jl:10)
n = 4
(in recurses at /home/tim/.julia/dev/DebuggingUtilities/test/funcdefs.jl:12)
n = 5
(in recurses at /home/tim/.julia/dev/DebuggingUtilities/test/funcdefs.jl:10)
n = 6
(in recurses at /home/tim/.julia/dev/DebuggingUtilities/test/funcdefs.jl:12)
n = 7
(in recurses at /home/tim/.julia/dev/DebuggingUtilities/test/funcdefs.jl:10)
n = 8
(in recurses at /home/tim/.julia/dev/DebuggingUtilities/test/funcdefs.jl:12)
n = 9
(in recurses at /home/tim/.julia/dev/DebuggingUtilities/test/funcdefs.jl:10)
n = 10
(in recurses at /home/tim/.julia/dev/DebuggingUtilities/test/funcdefs.jl:12)
Each additional space indicates one additional layer in the call chain. Most of the initial space (even for n=1) is due to Julia's own REPL.
This is similar to include, except that it displays progress, which can be useful in debugging long scripts that cause, e.g., segfaults.
Also similar to include, but it also measures the execution time of each expression and prints them in order of increasing duration.
Author: Timholy
Source Code: https://github.com/timholy/DebuggingUtilities.jl
License: View license