This package is used to dynamically generate a demo page and integrate it with Documenter.jl. Let's focus on writing demos, and leave it to Documenter.jl to manage all of them. The philosophy of DemoCards is that "the folder structure is the structure of the demos": organize your folders and files in a structured way, and DemoCards.jl will manage how you navigate through the pages.
examples
├── part1
│   ├── assets
│   ├── demo_1.md
│   ├── demo_2.md
│   └── demo_3.md
└── part2
    ├── demo_4.jl
    └── demo_5.jl
DemoCards would understand it in the following way:
# Examples
## Part1
demo_1.md
demo_2.md
demo_3.md
## Part2
demo_4.jl
demo_5.jl
Read the Quick Start for more instructions.
| repo | theme |
| --- | --- |
| AlgebraOfGraphics.jl | |
| Augmentor.jl | |
| Bokeh.jl | |
| FractionalDiffEq.jl | |
| LeetCode.jl | |
| Images.jl | |
| ImageMorphology.jl | |
| ReinforcementLearning.jl | |
| Plots.jl | |
This package relies heavily on Documenter.jl, Literate.jl, Mustache.jl and others. Unfortunately, I'm not a contributor to any of them. This package also uses a lot of regular expressions, which I know little about.
The initial purpose of this package was to set up the demo page of JuliaImages. I'm not sure how broadly this package suits the needs of others, but I'd be glad to accept any issues/PRs on improving the user experience.
Author: JuliaDocs/
Source Code: https://github.com/JuliaDocs/DemoCards.jl
License: MIT license
#julia #card #demo #documentation
Giving your novel a strong sense of place is vital to doing your part to engage the readers without confusing or frustrating them. Setting is a big part of this (though not the whole enchilada — there is also social context and historic period), and I often find writing students and consulting clients erring on one of two extremes.
**Either:** Every scene is set in a different, elaborately-described place from the last. This leads to confusion (and possibly exhaustion and impatience) for the reader, because they have no sense of what they need to actually pay attention to for later and what’s just…there. Are the details of that forest in chapter 2 important? Will I ever be back in this castle again? Is there a reason for this character to be in this particular room versus the one she was in the last time I saw her? Who knows!
Or: There are few or no clues at all as to where the characters are in a scene. What’s in the room? Are they even in a room? Are there other people in th — ope, yes, there are, someone just materialized, what is happening? This all leads to the dreaded “brains in jars” syndrome. That is, characters are only their thoughts and words, with no grounding in the space-time continuum. No one seems to be in a place, in a body, at a time of day.
Every aspect of writing a novel comes with its difficulties, and there are a lot of moving pieces to manage and deploy in the right balance. When you’re a newer writer, especially, there’s something to be said for keeping things simple until you have a handle on how to manage the arc and scope of a novel-length work. And whether you tend to overdo settings or underdo them, you can learn something from TV, especially classic sitcoms.
Your basic “live studio audience” sitcoms are performed and filmed on sets built inside studios vs. on location. This helps keep production expenses in check and helps the viewer feel at home — there’s a reliable and familiar container to hold the story of any given episode. The writers on the show don’t have to reinvent the wheel with every script.
Often, a show will have no more than two or three basic sets that are used episode to episode, and then a few other easily-understood sets (characters’ workplaces, restaurants, streets scenes) are also used regularly but not every episode.
#creative-writing #writing-exercise #writing-craft #writing #writing-tips
We understand the pressure students go through every day: once you have completed all the coursework requirements for your Ph.D. or MA, you need to begin working on your dissertation. So what do you think? Can you manage it on your own, or do you need dissertation help from a trained professional? It is a long process: you have to produce a substantial paper of around 10 to 20 thousand words, starting with an introduction to your subject and your reasons for researching it. After that, you need to write a literature review that surveys the existing research and frames your paper's theoretical structure. It sounds complicated. Truly, it is.
This assignment is one of the most tiring and unfamiliar tasks a student faces in 16 to 18 years of education. Once again, though, we have your back. The points above may have made you anxious about the undertaking, but our professional **dissertation service**s deliver excellent work. You're in the right place at the right time, because we are offering a discount on all of our writing services at TheWritingPlanet. It's the perfect opportunity to take advantage of our services while you relieve your stress and relax. In the end, it is your decision whether to do it yourself or hire an expert to do it for you.
A dissertation or thesis is completely new territory for you, and you will have to struggle while working through it, so starting and then finishing it will take an enormous amount of effort. Do you really have the time to do it? And could you do it as well as an expert would? You'll find your answer when you use TheWritingPlanet's writing services, which will help you produce your thesis, proposal, or research paper. When you take thesis help from our company, you get the support of a Ph.D. degree holder who already has extensive experience writing dissertations and proposal papers. You also needn't worry about passing marks, because you will achieve the highest of all. Do you know why? Because our team of writers includes Ph.D. holders who were top graders in their respective fields and schools. So you will get guaranteed results at TheWritingPlanet.
Are you stressed because your deadline is approaching?
Deadlines often become a nightmare for students, who end up overwhelmed by stress. That stress can even cause real health problems, such as panic attacks and anxiety attacks. With these issues in mind, we started our writing service years ago, and it has since reached a high standard. Students today are in the same situation, and we are their safety net.
Teachers don't appreciate that the time they allow for finishing a thesis isn't sufficient, because most students have plenty of other things on their to-do lists. Some of them have to take care of their families; some need to keep up with their jobs; others simply lose time to worry. So if your deadline is approaching soon and you're still in no position to finish your thesis paper on your own, you should hand your task over to us. We won't let you waste any more time, because time keeps slipping away, and nothing is left in the end.
We can help you at critical moments, when you believe there is no way left for you to finish your essay or thesis paper yourself. If you have us complete these assignments, you'll get your academic work back on track with excellent results. Our writers are the best in their fields, and they provide our customers with the best service.
#custom-writing-services #write-my-paper #the-writing-planet #cheap-dissertation
Throughout this tutorial, we will explore methods for reading, writing, and editing CSV (Comma-Separated Values) files using the Python standard library “csv”.
Because CSV files are so widely used for storing data, these methods will prove crucial to programmers across different fields of work.
CSV files are not standardized. Regardless, there are some common structures seen in all sorts of CSV files. In most cases, the first line of a CSV file is reserved for the headers of the columns of the files.
The lines that follow each form a row of the data, with the fields sorted in the order matching the first row. As the name suggests, data values are usually separated by a comma; however, other delimiters can be used.
Lastly, some CSV files will use double quotes when key characters are being used within a field.
All the examples used throughout this tutorial will be based on the following dummy data files: basic.csv, multiple_delimiters.csv, and new_delimiter.csv.
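The dummy files themselves are not shown here, so as a stand-in, here is a plausible basic.csv consistent with the structure described above (the field names and values are assumptions; only the shape matters — a quoted header row followed by comma-separated data rows), generated with the csv module:

```python
import csv

# Hypothetical contents for basic.csv: a quoted header row (note the
# space in "Favorite Color") followed by comma-separated data rows.
rows = [
    ["Name", "Age", "Favorite Color"],
    ["Alice", "30", "red"],
    ["Bob", "25", "blue"],
]

with open("basic.csv", "w", newline='') as csvfile:
    # QUOTE_ALL wraps every field in the quote character, matching the
    # quoted headers described above.
    writer = csv.writer(csvfile, quoting=csv.QUOTE_ALL)
    writer.writerows(rows)
```

If you run this, the first line of the file reads "Name","Age","Favorite Color", which is the quoting style the later quotechar examples rely on.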
First, we will examine the simplest case: reading an entire CSV file and printing each item read in.
import csv

path = "data/basic.csv"
with open(path, newline='') as csvfile:
    reader = csv.reader(csvfile)
    for row in reader:
        for col in row:
            print(col, end=" ")
        print()
Let us break down this code. The only library needed to work with CSV files is the “csv” Python library. After importing the library and setting the path of our CSV file, we use the built-in “open()” function to begin reading the file line by line.
The parsing of the CSV file is handled by the “csv.reader()” method which is discussed in detail later.
Each row of our CSV file will be returned as a list of strings that can be handled in any way you please. Here is the output of the code above:
Frequently in practice, we do not wish to store the headers of the columns of the CSV file. It is standard to store the headers on the first line of the CSV.
Luckily, “csv.reader()” tracks how many lines have been read in its “line_num” attribute. Using this attribute, we can simply skip the first line of the CSV file.
import csv

path = "data/basic.csv"
with open(path, newline='') as csvfile:
    reader = csv.reader(csvfile)
    for row in reader:
        if reader.line_num != 1:
            for col in row:
                print(col, end=" ")
            print()
In the code above, we create an object called “reader” which is assigned the value returned by “csv.reader()”.
reader = csv.reader(csvfile)
The “csv.reader()” method takes a few useful parameters. We will only focus on two: the “delimiter” parameter and the “quotechar” parameter. By default, the delimiter is the comma character and the quote character is the double quote.
We will discuss the delimiter parameter in the next section.
The “quotechar” parameter is a single character that is used to delimit fields containing special characters. In our example, all of our header fields have these quote characters around them.
This allows us to include a space character in the header “Favorite Color”. Notice how the result changes if we change our “quotechar” to the “|” symbol.
import csv

path = "data/basic.csv"
with open(path, newline='') as csvfile:
    reader = csv.reader(csvfile, quotechar='|')
    for row in reader:
        for col in row:
            print(col, end="\t")
        print()
Changing the “quotechar” from the double quote to “|” resulted in the double quotes appearing around the headers.
Reading a single column from a CSV is simple using our method above. Each row is a list containing the column elements.
Therefore, instead of printing out the entire row, we will only print out the desired column element from each row. For our example, we will print out the second column.
import csv

path = "data/basic.csv"
with open(path, newline='') as csvfile:
    reader = csv.reader(csvfile, delimiter=',')
    for row in reader:
        print(row[1])
CSV files frequently use the “,” symbol to distinguish between data values. In fact, the comma symbol is the default delimiter for the csv.reader() method.
In practice though, data files may use other symbols to distinguish between data values. For example, examine the contents of a CSV file (called new_delimiter.csv) which uses “;” to delimit between data values.
Reading in this CSV file to Python is simple if we alter the “delimiter” parameter of the “csv.reader()” method.
reader = csv.reader(csvfile, delimiter=';')
Notice how we changed the delimiter argument from “,” to “;”. The “csv.reader()” method will parse our CSV file as expected with this simple change.
import csv

path = "data/new_delimiter.csv"
with open(path, newline='') as csvfile:
    reader = csv.reader(csvfile, delimiter=';')
    for row in reader:
        for col in row:
            print(col, end="\t")
        print()
The standard CSV package in Python cannot handle multiple delimiters. In order to deal with such cases, we will use the standard package “re”.
The following example parses the CSV file “multiple_delimiters.csv”. Looking at the structure of the data in “multiple_delimiters.csv”, we see the headers are delimited with commas and the remaining rows are delimited with a comma, a vertical bar, and the text “Delimiter”.
The core function to accomplishing the desired parsing is the “re.split()” method which will take two strings as arguments: a highly structured string denoting the delimiters and a string to be split at those delimiters. First, let us see the code and output.
import re

path = "data/multiple_delimiters.csv"
with open(path, newline='') as csvfile:
    for row in csvfile:
        row = re.split('Delimiter|[|]|,|\n', row)
        for field in row:
            print(field, end='\t')
        print()
The key component of this code is the first parameter of “re.split()”.
'Delimiter|[|]|,|\n'
Each split point is separated by the symbol “|”. Since this symbol is also a delimiter in our text, we must put brackets around it to escape the character.
Lastly, we put the “\n” character as a delimiter so that the newline will not be included in the final field of each row. To see the importance of this, examine the result without “\n” included as a split point.
import re

path = "data/multiple_delimiters.csv"
with open(path, newline='') as csvfile:
    for row in csvfile:
        row = re.split('Delimiter|[|]|,', row)
        for field in row:
            print(field, end='\t')
        print()
Notice the extra spacing between each row of our output.
Writing to a CSV file will follow a similar structure to how we read the file. However, instead of printing the data, we will use the “writer” object within “csv” to write the data.
First, we will do the simplest example possible: creating a CSV file and writing a header and some data in it.
import csv

path = "data/write_to_file.csv"
with open(path, 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(['h1'] + ['h2'] + ['h3'])
    i = 0
    while i < 5:
        writer.writerow([i] + [i+1] + [i+2])
        i = i + 1
In this example, we instantiate the “writer” object with the “csv.writer()” method. After doing so, simply calling the “writerow()” method will write the list of strings onto the next row in our file with the default delimiter “,” placed between each field element.
Editing the contents of an existing CSV file will require the following steps: read in the CSV file data, edit the lists (Update information, append new information, delete information), and then write the new data back to the CSV file.
For our example, we will be editing the file created in the last section “write_to_file.csv”.
Our goal will be to double the values of the first row of data, delete the second row, and append a row of data at the end of the file.
import csv

path = "data/write_to_file.csv"

# Read in Data
rows = []
with open(path, newline='') as csvfile:
    reader = csv.reader(csvfile)
    for row in reader:
        rows.append(row)

# Edit the Data
rows[1] = ['0', '2', '4']
del rows[2]
rows.append(['8', '9', '10'])

# Write the Data to File
with open(path, 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    writer.writerows(rows)
Using the techniques discussed in the prior sections, we read the data and stored the lists in a variable called “rows”. Since all the elements were Python lists, we made the edits using standard list methods.
We opened the file in the same manner as before. The only difference when writing was our use of the “writerows()” method instead of the “writerow()” method.
The process discussed in the last section gives us a natural way to search and replace within a CSV file. In the example above, we read each line of the CSV file into a list of lists called “rows”.
Since “rows” is a list object, we can use Python's list methods to edit our CSV file before writing it back to a file. We used some list methods in the example, but another useful method is the string “replace()” method, which takes two arguments: first a string to be found, and then the string to replace the found string with.
For example, to replace all ‘3’s with ’10’s we could have done
for i, row in enumerate(rows):
    rows[i] = [field.replace('3', '10') for field in row]
Similarly, if the data is imported as a dictionary object (as discussed later), we can use Python’s dictionary methods to edit the data before re-writing to the file.
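As a sketch of that dictionary-based workflow (the file contents and header names below are assumptions chosen to mirror the earlier write_to_file.csv examples):

```python
import csv

path = "write_to_file.csv"

# Set up a small file like the one created earlier (headers h1..h3).
with open(path, 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(['h1', 'h2', 'h3'])
    writer.writerow(['0', '1', '2'])
    writer.writerow(['4', '5', '6'])

# Read the rows back in as dictionaries keyed by the header fields.
with open(path, newline='') as csvfile:
    rows = list(csv.DictReader(csvfile))

# Edit with ordinary dictionary methods, e.g. increment every 'h1' value.
for row in rows:
    row['h1'] = str(int(row['h1']) + 1)

# Write the edited dictionaries back out with matching field names.
with open(path, 'w', newline='') as csvfile:
    writer = csv.DictWriter(csvfile, fieldnames=['h1', 'h2', 'h3'])
    writer.writeheader()
    writer.writerows(rows)
```

The key constraint is the same one noted below for DictWriter: the dictionary keys must match the header names you want in the file.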
Python's “csv” library also provides a convenient method for writing dictionaries into a CSV file.
import csv

Dictionary1 = {'header1': '5', 'header2': '10', 'header3': '13'}
Dictionary2 = {'header1': '6', 'header2': '11', 'header3': '15'}
Dictionary3 = {'header1': '7', 'header2': '18', 'header3': '17'}
Dictionary4 = {'header1': '8', 'header2': '13', 'header3': '18'}

path = "data/write_to_file.csv"
with open(path, 'w', newline='') as csvfile:
    headers = ['header1', 'header2', 'header3']
    writer = csv.DictWriter(csvfile, fieldnames=headers)
    writer.writeheader()
    writer.writerow(Dictionary1)
    writer.writerow(Dictionary2)
    writer.writerow(Dictionary3)
    writer.writerow(Dictionary4)
In this example, we have four dictionaries with the same keys. It is crucial that the keys match the header names you want in the CSV file.
Since we will be inputting our rows as dictionary objects, we instantiate our writer object with the “csv.DictWriter()” method and specify our headers.
After this is done, it is as simple as calling the “writerow()” method to begin writing to our CSV file.
The CSV library also provides an intuitive “csv.DictReader()” method which inputs the rows from a CSV file into a dictionary object. Here is a simple example.
import csv

path = "data/basic.csv"
with open(path, newline='') as csvfile:
    reader = csv.DictReader(csvfile, delimiter=',')
    for row in reader:
        print(row)
As we can see in the output, each row was stored as a dictionary object.
If we wish to split a large CSV file into smaller CSV files, we use the following steps: input the file as a list of rows, write the first half of the rows to one file and write the second half of the rows to another.
Here is a simple example where we turn “basic.csv” into “basic_1.csv” and “basic_2.csv”.
import csv

path = "data/basic.csv"

# Read in Data
rows = []
with open(path, newline='') as csvfile:
    reader = csv.reader(csvfile)
    for row in reader:
        rows.append(row)

Number_of_Rows = len(rows)

# Write Half of the Data to a File
path = "data/basic_1.csv"
with open(path, 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(rows[0])  # Header
    for row in rows[1:int((Number_of_Rows + 1) / 2)]:
        writer.writerow(row)

# Write the Second Half of the Data to a File
path = "data/basic_2.csv"
with open(path, 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(rows[0])  # Header
    for row in rows[int((Number_of_Rows + 1) / 2):]:
        writer.writerow(row)
basic_1.csv:
basic_2.csv:
In these examples, no new methods were used. Instead, we used two separate loops to write the first and second halves of the rows to the two CSV files.
Original article source at: https://likegeeks.com/
Generis
Versatile Go code generator.
Generis is a lightweight code preprocessor adding the following features to the Go language :
package main;

// -- IMPORTS

import (
    "html"
    "io"
    "log"
    "net/http"
    "net/url"
    "strconv"
    );

// -- DEFINITIONS

#define DebugMode
#as true

// ~~

#define HttpPort
#as 8080

// ~~

#define WriteLine( {{text}} )
#as log.Println( {{text}} )

// ~~

#define local {{variable}} : {{type}};
#as var {{variable}} {{type}};

// ~~

#define DeclareStack( {{type}}, {{name}} )
#as
    // -- TYPES

    type {{name}}Stack struct
    {
        ElementArray []{{type}};
    }

    // -- INQUIRIES

    func ( stack * {{name}}Stack ) IsEmpty(
        ) bool
    {
        return len( stack.ElementArray ) == 0;
    }

    // -- OPERATIONS

    func ( stack * {{name}}Stack ) Push(
        element {{type}}
        )
    {
        stack.ElementArray = append( stack.ElementArray, element );
    }

    // ~~

    func ( stack * {{name}}Stack ) Pop(
        ) {{type}}
    {
        local
            element : {{type}};

        element = stack.ElementArray[ len( stack.ElementArray ) - 1 ];
        stack.ElementArray = stack.ElementArray[ : len( stack.ElementArray ) - 1 ];

        return element;
    }
#end

// ~~

#define DeclareStack( {{type}} )
#as DeclareStack( {{type}}, {{type:PascalCase}} )

// -- TYPES

DeclareStack( string )
DeclareStack( int32 )

// -- FUNCTIONS

func HandleRootPage(
    response_writer http.ResponseWriter,
    request * http.Request
    )
{
    local
        boolean : bool;
    local
        natural : uint;
    local
        integer : int;
    local
        real : float64;
    local
        escaped_html_text,
        escaped_url_text,
        text : string;
    local
        integer_stack : Int32Stack;

    boolean = true;
    natural = 10;
    integer = 20;
    real = 30.0;
    text = "text";
    escaped_url_text = "&escaped text?";
    escaped_html_text = "<escaped text/>";

    integer_stack.Push( 10 );
    integer_stack.Push( 20 );
    integer_stack.Push( 30 );

    #write response_writer
        <!DOCTYPE html>
        <html lang="en">
            <head>
                <meta charset="utf-8">
                <title><%= request.URL.Path %></title>
            </head>
            <body>
                <% if ( boolean ) { %>
                    <%= "URL : " + request.URL.Path %>
                    <br/>
                    <%@ natural %>
                    <%# integer %>
                    <%& real %>
                    <br/>
                    <%~ text %>
                    <%^ escaped_url_text %>
                    <%= escaped_html_text %>
                    <%= "<%% ignored %%>" %>
                    <%% ignored %%>
                <% } %>
                <br/>
                Stack :
                <br/>
                <% for !integer_stack.IsEmpty() { %>
                    <%# integer_stack.Pop() %>
                <% } %>
            </body>
        </html>
    #end
}

// ~~

func main()
{
    http.HandleFunc( "/", HandleRootPage );

    #if DebugMode
        WriteLine( "Listening on http://localhost:HttpPort" );
    #end

    log.Fatal(
        http.ListenAndServe( ":HttpPort", nil )
        );
}
Constants and generic code can be defined with the following syntax :
#define old code
#as new code

#define old code
#as
    new
    code
#end

#define
    old
    code
#as new code

#define
    old
    code
#as
    new
    code
#end
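To make the mechanics concrete, here is a toy sketch (in Python, purely illustrative — Generis itself is written in D, and its real #define also supports parameters, multi-line bodies, and filters) of how single-line #define/#as pairs could be collected and then expanded over the rest of the source:

```python
def expand(text):
    # Collect single-line "#define X" / "#as Y" pairs, strip them from
    # the source, then substitute each X with its Y in the remaining text.
    definitions = {}
    kept_lines = []
    lines = text.splitlines()
    i = 0
    while i < len(lines):
        if lines[i].startswith("#define ") and i + 1 < len(lines) and lines[i + 1].startswith("#as "):
            old_code = lines[i][len("#define "):].strip()
            new_code = lines[i + 1][len("#as "):].strip()
            definitions[old_code] = new_code
            i += 2  # skip both directive lines
        else:
            kept_lines.append(lines[i])
            i += 1
    body = "\n".join(kept_lines)
    for old_code, new_code in definitions.items():
        body = body.replace(old_code, new_code)
    return body
```

With this toy expander, a source containing `#define HttpPort` / `#as 8080` has every later occurrence of HttpPort replaced with 8080, which is the behavior the sample program above depends on.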
The #define directive can contain one or several parameters :
{{variable name}} : hierarchical code (with properly matching brackets and parentheses)
{{variable name#}} : statement code (hierarchical code without semicolon)
{{variable name$}} : plain code
{{variable name:boolean expression}} : conditional hierarchical code
{{variable name#:boolean expression}} : conditional statement code
{{variable name$:boolean expression}} : conditional plain code
They can have a boolean expression to require that they match specific conditions :
HasText text
HasPrefix prefix
HasSuffix suffix
HasIdentifier text
false
true
!expression
expression && expression
expression || expression
( expression )
The #define directive must not start or end with a parameter.
The #as directive can use the value of the #define parameters :
{{variable name}}
{{variable name:filter function}}
{{variable name:filter function:filter function:...}}
Their value can be changed through one or several filter functions :
LowerCase
UpperCase
MinorCase
MajorCase
SnakeCase
PascalCase
CamelCase
RemoveComments
RemoveBlanks
PackStrings
PackIdentifiers
ReplacePrefix old_prefix new_prefix
ReplaceSuffix old_suffix new_suffix
ReplaceText old_text new_text
ReplaceIdentifier old_identifier new_identifier
AddPrefix prefix
AddSuffix suffix
RemovePrefix prefix
RemoveSuffix suffix
RemoveText text
RemoveIdentifier identifier
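Judging from their names, the case filters presumably behave like these Python equivalents (an assumption based on the names and on the `{{type:PascalCase}}` example above, where int32 becomes Int32; the exact edge-case behavior is defined by Generis itself):

```python
import re

def pascal_case(identifier):
    # "http_port" -> "HttpPort": capitalize each underscore-separated part.
    return "".join(part.capitalize() for part in identifier.split("_"))

def snake_case(identifier):
    # "HttpPort" -> "http_port": insert "_" before interior capitals, lowercase all.
    return re.sub(r"(?<!^)(?=[A-Z])", "_", identifier).lower()
```

This matches how `DeclareStack( int32 )` can produce an Int32Stack type in the sample code.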
Conditional code can be defined with the following syntax :
#if boolean expression
    #if boolean expression
        ...
    #else
        ...
    #end
#else
    #if boolean expression
        ...
    #else
        ...
    #end
#end
The boolean expression can use the following operators :
false
true
!expression
expression && expression
expression || expression
( expression )
Templated HTML code can be sent to a stream writer using the following syntax :
#write writer expression
<% code %>
<%@ natural expression %>
<%# integer expression %>
<%& real expression %>
<%~ text expression %>
<%= escaped text expression %>
<%! removed content %>
<%% ignored tags %%>
#end
The --join option requires the statements to end with a semicolon.
The #write directive is only available for the Go language.

Install the DMD 2 compiler (using the MinGW setup option on Windows).
Build the executable with the following command line :
dmd -m64 generis.d
generis [options]
--prefix # : set the command prefix
--parse INPUT_FOLDER/ : parse the definitions of the Generis files in the input folder
--process INPUT_FOLDER/ OUTPUT_FOLDER/ : read the Generis files in the input folder and write the processed files in the output folder
--trim : trim the HTML templates
--join : join the split statements
--create : create the output folders if needed
--watch : watch the Generis files for modifications
--pause 500 : time to wait before checking the Generis files again
--tabulation 4 : set the tabulation space count
--extension .go : generate files with this extension
generis --process GS/ GO/
Reads the Generis files in the GS/ folder and writes Go files in the GO/ folder.
generis --process GS/ GO/ --create
Reads the Generis files in the GS/ folder and writes Go files in the GO/ folder, creating the output folders if needed.
generis --process GS/ GO/ --create --watch
Reads the Generis files in the GS/ folder and writes Go files in the GO/ folder, creating the output folders if needed and watching the Generis files for modifications.
generis --process GS/ GO/ --trim --join --create --watch
Reads the Generis files in the GS/ folder and writes Go files in the GO/ folder, trimming the HTML templates, joining the split statements, creating the output folders if needed and watching the Generis files for modifications.
Version: 2.0
Author: Senselogic
Source Code: https://github.com/senselogic/GENERIS
License: View license