Read a string char by char from an associative pointer array

I hope the title of my question is correct.

I have a stringpool

char **string_pool;

that gets initialized in a function like

string_pool = malloc( sizeof(char *) * 1024);

I get strings from stdin and scanf them into the array

scanf("%[^\n]s", &string_pool[index]);

so I can print it out using printf

printf("gets %s\n", &string_pool[index]);

How can I:

  • get the length of string_pool[index]
  • read string_pool[index] char by char in a loop

Thank you

Edit

Maybe I should explain it a bit more: it's a virtual machine with a virtual instruction set, and a program like

push 1
read
gets

should:

  • push 1 on the stack -> let x be 1
  • read stdin as string into string_pool[x]
  • push all characters onto the stack

The functions look like

case GETS: {
    int index = popv(); // index is already on top of the stack
    int strl = strlen(&string_pool[index]);
    printf("gets %s with a length of %d\n", &string_pool[index], strl);
    // pseudo code
    // push each char as integer on the stack
    foreach(char in string_pool[index]) push((int)char);
    break;
}

case READ: {
    int index = popv();
    scanf("%[^\n]s", &string_pool[index]);
    break;
}

case WRITE: {
    int index = popv();
    printf("%s", &string_pool[index]);
    break;
}

My problem is in the GETS case. I want to push every char as int onto the stack.
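Spelled out, the pseudo code above might become something like the following minimal sketch. It assumes string_pool[index] already points to a NUL-terminated string (so strlen() can be called on it directly rather than on its address in the pool) and that push() accepts an int:

case GETS: {
    int index = popv();                 // index is already on top of the stack
    char *s = string_pool[index];       // the string itself, not its slot's address
    int strl = (int) strlen(s);         // length of the NUL-terminated string
    printf("gets %s with a length of %d\n", s, strl);

    // push each char as an integer onto the stack
    for (int i = 0; i < strl; i++) {
        push((int) s[i]);
    }
    break;
}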

Validate a Credit Card Number with Javascript, Ruby, and C

Credit card companies are responsible for a high volume of highly sensitive global network traffic per minute with no margin for error. These companies need to ensure they are not wasting resources processing unnecessary requests. When a credit card is run, the processor has to look up the account to ensure it exists, then query the balance to ensure the amount requested is available. While an individual transaction is cheap and small, the scales involved are enormous.

When it comes to programming, each language that I have encountered comes with its unique quirks and virtues. I wanted to compare the difference in syntax of several common languages by writing a credit card checker. The goal of this application is to accept an input of a credit card number and then to identify if a credit card number is syntactically valid. This post is primarily a comparison between a lower level language vs a higher level language. If you would like to see how I implemented the credit card checker, check out my code in C, Ruby, or Javascript here.

[Image: Credit Card Payment Method]

Most of us have encountered this screen when trying to make a payment for an online purchase. Usually at the front end, JavaScript handles the validation to check whether the credit card is a valid card before a call is sent to the servers. The validation check is based on a checksum algorithm created by Hans Peter Luhn. Here’s a simple breakdown of Luhn’s algorithm.

Luhn’s checksum algorithm

Multiply every other digit by 2, starting with the number’s second-to-last digit, and then add those products’ digits together.

Add the sum to the sum of the digits that weren’t multiplied by 2.

If the total’s last digit is 0 then the number is valid!

Take for example the following American Express number, 378734493671000. Starting from the second-to-last digit, multiply every other digit by 2.

7*2 + 7*2 + 4*2 + 9*2 + 6*2 + 1*2 + 0*2

The result:

14 + 14 + 8 + 18 + 12 + 2 + 0

Adding the product digits:

1 + 4 + 1 + 4 + 8 + 1 + 8 +1 + 2 + 2 + 0 = 32

Finally add the digits that were not multiplied to the sum

32 + 3 + 8 + 3 + 4 + 3 + 7 + 0 + 0 = 60

The checksum 60 ends with the digit 0, therefore it is a syntactically valid credit card number.

Identifying Credit Card Types

Aside from the checksum, credit card numbers also identify the type of credit card company. Visa cards start with the number 4. MasterCards start with the numbers 51, 52, 53, 54, or 55. American Express cards start with 34 or 37.

The Big Picture

The solution can be broken down into two parts:

  1. Check if the card number is valid.
  2. Identify the type of credit card.

Let’s take a look at the syntax for C and walkthrough the code.

Card Length Validation

To check if the card number is valid, there is a preliminary check we can perform before calculating the checksum. We know that a credit card number can only be 13, 15, or 16 digits long. We can check that with a simple while or for loop.

The user’s card number is stored in a variable, and on each iteration the last digit of the number is removed and counted. The resulting count of digits is then checked to see whether it is 13, 15, or 16.
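As a rough sketch (not the post's original code, and with illustrative variable names), the counting loop in C could look like this, assuming the card number fits in a long long:

long long card_number = 378734493671000LL;   /* the example number from above */
int length = 0;

while (card_number > 0) {
    card_number /= 10;   /* remove the last digit */
    length++;            /* count it */
}

int valid_length = (length == 13 || length == 15 || length == 16);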

Javascript and Ruby both have higher order functions that simplify the process of determining the length of the variable. Essentially, under the hood of the method or function length a similar process is being utilized.

Checksum Validation

After passing the first test, the next step is to see if the checksum is valid. Again, we will take a look at the syntax in C.

In this example, the array number is declared, the card number’s digits are enumerated, and each digit is saved into the array. The digits stored this way end up reversed relative to the original card number, because the last digit is removed first and stored at the first index.

Take, for example, the credit card number 4012 8888 8888 1881. Using the modulo and divide-by-10 method to fill the array, the resulting array would be [1,8,8,1,8,8,8,8,8,8,8,8,2,1,0,4].
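A sketch of that extraction in C, again with illustrative names, might be:

long long n = 4012888888881881LL;   /* example card number */
int number[16];
int length = 0;

while (n > 0) {
    number[length++] = n % 10;   /* last digit is stored first, so the array ends up reversed */
    n /= 10;
}
/* number now holds {1,8,8,1,8,8,8,8,8,8,8,8,2,1,0,4} */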

In JavaScript, if using a higher order function to convert a number, the number needs to be converted to a string first before using the higher order function to convert it to an array.

It can be noted that this concept of converting to a specific data type is also similar when using Ruby methods to convert to an array. Ruby and JavaScript are similar in that they both require the data to be of a certain type and often require coercing the data into a usable type before a higher order function can operate on it. You can notice in my examples of JavaScript (above) and Ruby (below) that the integers are converted into a string, turned into an array, and then mapped back into integers.

The checksum in C was cleaner to implement for this reason: fewer type conversions were needed to manipulate the data. The array version used simple for loops and conditional statements to validate the number.

In the C code above, a new array was created to clone the number array, and starting from the second-to-last value, every other value was multiplied by 2.

The bulk of the validation occurs in this nested if statement. First the length of the card number is determined. Then the array’s digits are added up and the checksum is checked, with the digits of each doubled value summed in this one line:

sumdigit = (number[i] % 10) + (number[i]/10 % 10);

The type of card is validated by checking the first and second index of the array. In this example, Visa cards start with the number 4.

cardarray[12] == 4 && accumulator % 10 == 0
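Putting those pieces together, a rough C sketch of the checksum and the Visa check (not the author's exact code; number, length, and accumulator are the illustrative names from the sketches above) could look like:

int accumulator = 0;

for (int i = 0; i < length; i++) {
    if (i % 2 == 1) {
        /* every other digit, starting from the second-to-last */
        int doubled = number[i] * 2;
        accumulator += (doubled % 10) + (doubled / 10 % 10);   /* add the product's digits */
    } else {
        accumulator += number[i];   /* digits that were not multiplied */
    }
}

/* for a 13-digit Visa, the first digit of the card sits at index 12 of the reversed array */
if (accumulator % 10 == 0 && length == 13 && number[12] == 4) {
    printf("VISA\n");
}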

Implementing the validator in Ruby and JavaScript by using higher order functions made it fairly verbose. It would definitely be possible to use the same algorithm from the C program in JavaScript and Ruby. However, I wanted to utilize the higher order functions of each language.

To split the array in C, the card number was cloned and multiplied using two for loops. In JavaScript, I found that I could use the filter function to separate the initial array into two arrays of every other digit, and then a simple map over the first array to double the digits.

The array that has been multiplied by 2 is then summed and added to the array that has not been multiplied, using the reduce method. If the checksum passes, then the first two digits of the card array are sliced off to check the type of card. The implementation of the card type check is similar to the C syntax: using a conditional statement, the digits of the card array are evaluated for each type.

Concluding Thoughts

This post is more of a self-reflection on the differences between programming in a lower level language and a higher level language. In attempting to create a credit card checker, I found the lower level language syntax to be more concise in getting to the solution, whereas the higher level languages required data type conversions in order to use their higher order functions.

I’m still learning more about code every day. I would love to hear from you, if you have any tips or suggestions.

Thank you !

How to fix “exception in thread main java.lang.stringindexoutofboundsexception string index out of range” errors

I'm trying to recreate the game of mastermind except a simplified version. I've been stuck on this one error and I don't know how to fix it.

String combo = "";
int plus = 0;
String in = "";
int right = 0;
int numbers = 6;
int length = 4;
int tries = 10;
int[] guessNums = new int[numbers];
int[] comboNums = new int[numbers];

rules();
combo = guesses();

for(int i = 0; i < length; i++)
{
    if(in.charAt(i) == combo.charAt(i))
    {
        right++;
    }
    guessNums[in.charAt(i)-49]++;
    comboNums[combo.charAt(i)-49]++;
}

for(int i = 0; i < numbers; i++)
{
    while(comboNums[i] > 0 && guessNums[i] > 0)
    {
        plus++;
        comboNums[i]--;
        guessNums[i]--;
    }
}
String reset = "\u001B[0m";
System.out.print("\t");
String a = "" + right;
printRed(a);
System.out.println(" " + reset + "" + plus);
System.out.println("\n");

tries--;
return(right == length);


It says the error is on the line if(in.charAt(i) == combo.charAt(i)).

How to Write Python C Extension Modules using the Python API

There are several ways in which you can extend the functionality of Python. One of these is to write your Python module in C or C++. In this tutorial, you’ll discover how to use the Python API to write Python C extension modules.

You’ll learn how to:

  • Invoke C functions from within Python
  • Pass arguments from Python to C and parse them accordingly
  • Raise exceptions from C code and create custom Python exceptions in C
  • Define global constants in C and make them accessible in Python
  • Test, package, and distribute your Python C extension module

Table of Contents

  • Extending Your Python Program
  • Writing a Python Interface in C
    • Understanding fputs()
    • Writing the C Function for fputs()
    • Wrapping fputs()
    • Writing the Init Function
    • Putting It All Together
  • Packaging Your Python C Extension Module
    • Building Your Module
    • Running Your Module
  • Raising Exceptions
    • Raising Exceptions From C Code
    • Raising Custom Exceptions
  • Defining Constants
  • Testing Your Module
  • Considering Alternatives
  • Conclusion

Extending Your Python Program

One of the lesser-known yet incredibly powerful features of Python is its ability to call functions and libraries defined in compiled languages such as C or C++. This allows you to extend the capabilities of your program beyond what Python’s built-in features have to offer.

There are many languages you could choose from to extend the functionality of Python. So, why should you use C? Here are a few reasons why you might decide to build a Python C extension module:

  1. To implement new built-in object types: It’s possible to write a Python class in C, and then instantiate and extend that class from Python itself. There can be many reasons for doing this, but more often than not, performance is primarily what drives developers to turn to C. Such a situation is rare, but it’s good to know the extent to which Python can be extended.

  2. To call C library functions and system calls: Many programming languages provide interfaces to the most commonly used system calls. Still, there may be other lesser-used system calls that are only accessible through C. The os module in Python is one example.

This is not an exhaustive list, but it gives you the gist of what can be done when extending Python using C or any other language.

To write Python modules in C, you’ll need to use the Python API, which defines the various functions, macros, and variables that allow the Python interpreter to call your C code. All of these tools and more are collectively bundled in the Python.h header file.

Writing a Python Interface in C

In this tutorial, you’ll write a small wrapper for a C library function, which you’ll then invoke from within Python. Implementing a wrapper yourself will give you a better idea about when and how to use C to extend your Python module.

Understanding fputs()

fputs() is the C library function that you’ll be wrapping:

int fputs(const char *, FILE *)

This function takes two arguments:

  1. const char * is an array of characters.
  2. FILE * is a file stream pointer.

fputs() writes the character array to the file specified by the file stream and returns a non-negative value. If the operation is successful, then this value will denote the number of bytes written to the file. If there’s an error, then it returns EOF. You can read more about this C library function and its other variants in the manual page entry.
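As a small illustrative aside (not part of the tutorial's module), a caller can detect a failure by comparing the return value against EOF:

#include <stdio.h>

int main(void) {
    FILE *fp = fopen("write.txt", "w");
    if (fp == NULL) {
        return 1;                              /* could not open the file */
    }
    if (fputs("Real Python!", fp) == EOF) {    /* fputs() reports errors by returning EOF */
        fclose(fp);
        return 1;
    }
    fclose(fp);
    return 0;
}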

Writing the C Function for fputs()

This is a basic C program that uses fputs() to write a string to a file stream:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main() {
    FILE *fp = fopen("write.txt", "w");
    fputs("Real Python!", fp);
    fclose(fp);
    return 0;
}

This snippet of code can be summarized as follows:

  1. Open the file write.txt.
  2. Write the string "Real Python!" to the file.

Note: The C code in this article should build on most systems. It has been tested on GCC without using any special flags.

In the following section, you’ll write a wrapper for this C function.

Wrapping fputs()

It might seem a little weird to see the full code before an explanation of how it works. However, taking a moment to inspect the final product will supplement your understanding in the following sections. The code block below shows the final wrapped version of your C code:

static PyObject *method_fputs(PyObject *self, PyObject *args) {
    char *str, *filename = NULL;
    int bytes_copied = -1;

    /* Parse arguments */
    if(!PyArg_ParseTuple(args, "ss", &str, &filename)) {
        return NULL;
    }

    FILE *fp = fopen(filename, "w");
    bytes_copied = fputs(str, fp);
    fclose(fp);

    return PyLong_FromLong(bytes_copied);
}

This code snippet references three object structures:

  1. PyObject
  2. PyArg_ParseTuple()
  3. PyLong_FromLong()

These are used for data type definition for the Python language. You’ll go through each of them now.

PyObject

PyObject is an object structure that you use to define object types for Python. All Python objects share a small number of fields that are defined using the PyObject structure. All other object types are extensions of this type.

PyObject tells the Python interpreter to treat a pointer to an object as an object. For instance, setting the return type of the above function as PyObject defines the common fields that are required by the Python interpreter in order to recognize this as a valid Python type.

Take another look at the first few lines of your C code:

static PyObject *method_fputs(PyObject *self, PyObject *args) {
    char *str, *filename = NULL;
    int bytes_copied = -1;

    /* Snip */

In line 2, you declare the argument types you wish to receive from your Python code:

  1. char *str is the string you want to write to the file stream.
  2. char *filename is the name of the file to write to.

PyArg_ParseTuple()

PyArg_ParseTuple() parses the arguments you’ll receive from your Python program into local variables:

static PyObject *method_fputs(PyObject *self, PyObject *args) {
    char *str, *filename = NULL;
    int bytes_copied = -1;

    /* Parse arguments */
    if(!PyArg_ParseTuple(args, "ss", &str, &filename)) {
        return NULL;
    }

    /* Snip */

If you look at line 6, then you’ll see that PyArg_ParseTuple() takes the following arguments:

  • args are of type PyObject.

  • "ss" is the format specifier that specifies the data type of the arguments to parse. (You can check out the official documentation for a complete reference.)

  • &str and &filename are pointers to local variables to which the parsed values will be assigned.

PyArg_ParseTuple() evaluates to false on failure. If it fails, then the function will return NULL and not proceed any further.

fputs()

As you’ve seen before, fputs() takes two arguments, one of which is the FILE * object. Since you can’t parse a Python TextIOWrapper object using the Python API in C, you’ll have to use a workaround:

static PyObject *method_fputs(PyObject *self, PyObject *args) {
    char *str, *filename = NULL;
    int bytes_copied = -1;

    /* Parse arguments */
    if(!PyArg_ParseTuple(args, "ss", &str, &filename)) {
        return NULL;
    }

    FILE *fp = fopen(filename, "w");
    bytes_copied = fputs(str, fp);
    fclose(fp);

    return PyLong_FromLong(bytes_copied);
}

Here’s a breakdown of what this code does:

  • In line 10, you’re passing the name of the file that you’ll use to create a FILE * object and pass it on to the function.
  • In line 11, you call fputs() with the following arguments:
    • str is the string you want to write to the file.
    • fp is the FILE * object you defined in line 10.

You then store the return value of fputs() in bytes_copied. This integer variable will be returned to the fputs() invocation within the Python interpreter.

PyLong_FromLong(bytes_copied)

PyLong_FromLong() returns a PyLongObject, which represents an integer object in Python. You can find it at the very end of your C code:

static PyObject *method_fputs(PyObject *self, PyObject *args) {
    char *str, *filename = NULL;
    int bytes_copied = -1;

    /* Parse arguments */
    if(!PyArg_ParseTuple(args, "ss", &str, &filename)) {
        return NULL;
    }

    FILE *fp = fopen(filename, "w");
    bytes_copied = fputs(str, fp);
    fclose(fp);

    return PyLong_FromLong(bytes_copied);
}

Line 14 generates a PyLongObject for bytes_copied, the variable to be returned when the function is invoked in Python. You must return a PyObject* from your Python C extension module back to the Python interpreter.

Writing the Init Function

You’ve written the code that makes up the core functionality of your Python C extension module. However, there are still a few extra functions that are necessary to get your module up and running. You’ll need to write definitions of your module and the methods it contains, like so:

static PyMethodDef FputsMethods[] = {
    {"fputs", method_fputs, METH_VARARGS, "Python interface for fputs C library function"},
    {NULL, NULL, 0, NULL}
};


static struct PyModuleDef fputsmodule = {
    PyModuleDef_HEAD_INIT,
    "fputs",
    "Python interface for the fputs C library function",
    -1,
    FputsMethods
};

These structures include meta information about your module that will be used by the Python interpreter. Let’s go through each of the structs above to see how they work.

PyMethodDef

In order to call the methods defined in your module, you’ll need to tell the Python interpreter about them first. To do this, you can use PyMethodDef. This is a structure with 4 members representing a single method in your module.

Ideally, there will be more than one method in your Python C extension module that you want to be callable from the Python interpreter. This is why you need to define an array of PyMethodDef structs:

static PyMethodDef FputsMethods[] = {
    {"fputs", method_fputs, METH_VARARGS, "Python interface for fputs C library function"},
    {NULL, NULL, 0, NULL}
};

Each individual member of the struct holds the following info:

  • "fputs" is the name the user would write to invoke this particular function.

  • method_fputs is the name of the C function to invoke.

  • METH_VARARGS is a flag that tells the interpreter that the function will accept two arguments of type PyObject*:

    1. self is the module object.
    2. args is a tuple containing the actual arguments to your function. As explained previously, these arguments are unpacked using PyArg_ParseTuple().
  • The final string is a value to represent the method docstring.

PyModuleDef

Just as PyMethodDef holds information about the methods in your Python C extension module, the PyModuleDef struct holds information about your module itself. It is not an array of structures, but rather a single structure that’s used for module definition:

static struct PyModuleDef fputsmodule = {
    PyModuleDef_HEAD_INIT,
    "fputs",
    "Python interface for the fputs C library function",
    -1,
    FputsMethods
};

There are a total of 9 members in this struct, but not all of them are required. In the code block above, you initialize the following five:

  1. PyModuleDef_HEAD_INIT is a member of type PyModuleDef_Base, which is advised to have just this one value.

  2. "fputs" is the name of your Python C extension module.

  3. The string is the value that represents your module docstring. You can use NULL to have no docstring, or you can specify a docstring by passing a const char * as shown in the snippet above. You can also use PyDoc_STRVAR() to define a docstring for your module.

  4. -1 is the amount of memory needed to store your program state. This member is of type Py_ssize_t. It’s helpful when your module is used in multiple sub-interpreters, and it can have the following values:

    • A negative value indicates that this module doesn’t have support for sub-interpreters.
    • A non-negative value enables the re-initialization of your module. It also specifies the memory requirement of your module to be allocated on each sub-interpreter session.
  5. FputsMethods is the reference to your method table. This is the array of PyMethodDef structs you defined earlier.

For more information, check out the official Python documentation on PyModuleDef.

PyMODINIT_FUNC

Now that you’ve defined your Python C extension module and method structures, it’s time to put them to use. When a Python program imports your module for the first time, it will call PyInit_fputs():

PyMODINIT_FUNC PyInit_fputs(void) {
    return PyModule_Create(&fputsmodule);
}

PyMODINIT_FUNC does 3 things implicitly when stated as the function return type:

  1. It implicitly sets the return type of the function as PyObject*.
  2. It declares any special linkages.
  3. It declares the function as extern “C.” In case you’re using C++, it tells the C++ compiler not to do name-mangling on the symbols.

PyModule_Create() will return a new module object of type PyObject *. For the argument, you’ll pass the address of the method structure that you’ve already defined previously, fputsmodule.

Note: In Python 3, your init function must return a PyObject * type. However, if you’re using Python 2, then PyMODINIT_FUNC declares the function return type as void.

Putting It All Together

Now that you’ve written the necessary parts of your Python C extension module, let’s take a step back to see how it all fits together. The following diagram shows the components of your module and how they interact with the Python interpreter:

Python C API Communication

When you import your Python C extension module, PyInit_fputs() is the first method to be invoked. However, before a reference is returned to the Python interpreter, the function makes a subsequent call to PyModule_Create(). This will initialize the structures PyModuleDef and PyMethodDef, which hold meta information about your module. It makes sense to have them ready since you’ll make use of them in your init function.

Once this is complete, a reference to the module object is finally returned to the Python interpreter. The following diagram shows the internal flow of your module:

Python C API Module API

The module object returned by PyModule_Create() has a reference to the module structure PyModuleDef, which in turn has a reference to the method table PyMethodDef. When you call a method defined in your Python C extension module, the Python interpreter uses the module object and all of the references it carries to execute the specific method. (While this isn’t exactly how the Python interpreter handles things under the hood, it’ll give you an idea of how it works.)

Similarly, you can access various other methods and properties of your module, such as the module docstring or the method docstring. These are defined inside their respective structures.

Now you have an idea of what happens when you call fputs() from the Python interpreter. The interpreter uses your module object as well as the module and method references to invoke the method. Finally, let’s take a look at how the interpreter handles the actual execution of your Python C extension module:

Python C API fputs Function Flow

Once method_fputs() is invoked, the program executes the following steps:

  1. Parse the arguments you passed from the Python interpreter with PyArg_ParseTuple()
  2. Pass these arguments to fputs(), the C library function that forms the crux of your module
  3. Use PyLong_FromLong to return the value from fputs()

To see these same steps in code, take a look at method_fputs() again:

static PyObject *method_fputs(PyObject *self, PyObject *args) {
    char *str, *filename = NULL;
    int bytes_copied = -1;

    /* Parse arguments */
    if(!PyArg_ParseTuple(args, "ss", &str, &filename)) {
        return NULL;
    }

    FILE *fp = fopen(filename, "w");
    bytes_copied = fputs(str, fp);
    fclose(fp);

    return PyLong_FromLong(bytes_copied);
}

To recap, your method will parse the arguments passed to your module, send them on to fputs(), and return the results.

Packaging Your Python C Extension Module

Before you can import your new module, you first need to build it. You can do this by using the Python package distutils.

You’ll need a file called setup.py to install your application. For this tutorial, you’ll be focusing on the part specific to the Python C extension module.

A minimal setup.py file for your module should look like this:

from distutils.core import setup, Extension

def main():
    setup(name="fputs",
          version="1.0.0",
          description="Python interface for the fputs C library function",
          author="<your name>",
          author_email="[email protected]",
          ext_modules=[Extension("fputs", ["fputsmodule.c"])])

if __name__ == "__main__":
    main()

The code block above shows the standard arguments that are passed to setup(). Take a closer look at the last argument, ext_modules. This takes a list of objects of the Extension class. An object of the Extension class describes a single C or C++ extension module in a setup script. Here, you pass two arguments to its constructor, namely:

  • name is the name of the module.
  • [filename] is a list of paths to files with the source code, relative to the setup script.

Building Your Module

Now that you have your setup.py file, you can use it to build your Python C extension module. It’s strongly advised that you use a virtual environment to avoid conflicts with your Python environment.

Navigate to the directory containing setup.py and run the following command:

$ python3 setup.py install

This command will compile and install your Python C extension module in the current directory. If there are any errors or warnings, then your program will throw them now. Make sure you fix these before you try to import your module.

By default, the Python interpreter uses clang for compiling the C code. If you want to use gcc or any other C compiler for the job, then you need to set the CC environment variable accordingly, either inside the setup script or directly on the command line. For instance, you can tell the Python interpreter to use gcc to compile and build your module this way:

$ CC=gcc python3 setup.py install

However, the Python interpreter will automatically fall back to gcc if clang is not available.

Running Your Module

Now that everything is in place, it’s time to see your module in action! Once it’s successfully built, fire up the interpreter to test run your Python C extension module:

>>> import fputs
>>> fputs.__doc__
'Python interface for the fputs C library function'
>>> fputs.__name__
'fputs'
>>> # Write to an empty file named `write.txt`
>>> fputs.fputs("Real Python!", "write.txt")
13
>>> with open("write.txt", "r") as f:
...     print(f.read())
...
Real Python!

Your function performs as expected! You pass a string "Real Python!" and a file to write this string to, write.txt. The call to fputs() returns the number of bytes written to the file. You can verify this by printing the contents of the file.

Also recall how you passed certain arguments to the PyModuleDef and PyMethodDef structures. You can see from this output that Python has used these structures to assign things like the function name and docstring.

With that, you have a basic version of your module ready, but there’s a lot more that you can do! You can improve your module by adding things like custom exceptions and constants.

Raising Exceptions

Python exceptions are very different from C++ exceptions. If you want to raise Python exceptions from your C extension module, then you can use the Python API to do so. Some of the functions provided by the Python API for exception raising are as follows:

  • PyErr_SetString(PyObject *type, const char *message): takes an exception type and a C string with the error message
  • PyErr_Format(PyObject *type, const char *format, ...): takes an exception type and a printf-style format string for the error message
  • PyErr_SetObject(PyObject *type, PyObject *value): takes an exception type and an arbitrary Python object as the error value

You can use any of these to raise an exception. However, which to use and when depends entirely on your requirements. The Python API has all the standard exceptions pre-defined as PyObject types.

Raising Exceptions From C Code

While you can’t raise exceptions in C, the Python API will allow you to raise exceptions from your Python C extension module. Let’s test this functionality by adding PyErr_SetString() to your code. This will raise an exception whenever the length of the string to be written is less than 10 characters:

static PyObject *method_fputs(PyObject *self, PyObject *args) {
    char *str, *filename = NULL;
    int bytes_copied = -1;

    /* Parse arguments */
    if(!PyArg_ParseTuple(args, "ss", &str, &filename)) {
        return NULL;
    }

    if (strlen(str) < 10) {
        PyErr_SetString(PyExc_ValueError, "String length must be greater than 10");
        return NULL;
    }

    FILE *fp = fopen(filename, "w");
    bytes_copied = fputs(str, fp);
    fclose(fp);

    return PyLong_FromLong(bytes_copied);
}

Here, you check the length of the input string immediately after you parse the arguments and before you call fputs(). If the string passed by the user is shorter than 10 characters, then your program will raise a ValueError with a custom message. The program execution stops as soon as the exception occurs.

Note how method_fputs() returns NULL after raising the exception. This is because whenever you raise an exception using PyErr_*(), it automatically sets an internal entry in the exception table and returns it. The calling function is not required to subsequently set the entry again. For this reason, the calling function returns a value that indicates failure, usually NULL or -1. (This should also explain why there was a need to return NULL when you parse arguments in method_fputs() using PyArg_ParseTuple().)

Raising Custom Exceptions

You can also raise custom exceptions in your Python C extension module. However, things are a bit different. Previously, in PyMODINIT_FUNC, you were simply returning the instance returned by PyModule_Create and calling it a day. But for your custom exception to be accessible by the user of your module, you need to add your custom exception to your module instance before you return it:

static PyObject *StringTooShortError = NULL;

PyMODINIT_FUNC PyInit_fputs(void) {
    /* Assign module value */
    PyObject *module = PyModule_Create(&fputsmodule);

    /* Initialize new exception object */
    StringTooShortError = PyErr_NewException("fputs.StringTooShortError", NULL, NULL);

    /* Add exception object to your module */
    PyModule_AddObject(module, "StringTooShortError", StringTooShortError);

    return module;
}

As before, you start off by creating a module object. Then you create a new exception object using PyErr_NewException. This takes a string of the form module.classname as the name of the exception class that you wish to create. Choose something descriptive to make it easier for the user to interpret what has actually gone wrong.

Next, you add this to your module object using PyModule_AddObject. This takes your module object, the name of the new object being added, and the custom exception object itself as arguments. Finally, you return your module object.

Now that you’ve defined a custom exception for your module to raise, you need to update method_fputs() so that it raises the appropriate exception:

static PyObject *method_fputs(PyObject *self, PyObject *args) {
    char *str, *filename = NULL;
    int bytes_copied = -1;

    /* Parse arguments */
    if(!PyArg_ParseTuple(args, "ss", &str, &filename)) {
        return NULL;
    }

    if (strlen(str) < 10) {
        /* Passing custom exception */
        PyErr_SetString(StringTooShortError, "String length must be greater than 10");
        return NULL;
    }

    FILE *fp = fopen(filename, "w");
    bytes_copied = fputs(str, fp);
    fclose(fp);

    return PyLong_FromLong(bytes_copied);
}

After building the module with the new changes, you can test that your custom exception is working as expected by trying to write a string that is less than 10 characters in length:

>>> import fputs
>>> # Custom exception
>>> fputs.fputs("RP!", fp.fileno())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
fputs.StringTooShortError: String length must be greater than 10

When you try to write a string with fewer than 10 characters, your custom exception is raised with a message explaining what went wrong.

Defining Constants

There are cases where you’ll want to use or define constants in your Python C extension module. This is quite similar to how you defined custom exceptions in the previous section. You can define a new constant and add it to your module instance using PyModule_AddIntConstant():

PyMODINIT_FUNC PyInit_fputs(void) {
    /* Assign module value */
    PyObject *module = PyModule_Create(&fputsmodule);

    /* Add int constant by name */
    PyModule_AddIntConstant(module, "FPUTS_FLAG", 64);

    return module;
}

This Python API function takes the following arguments:

  • The instance of your module
  • The name of the constant
  • The value of the constant

You can do the same for macros using PyModule_AddIntMacro():

PyMODINIT_FUNC PyInit_fputs(void) {
    /* Assign module value */
    PyObject *module = PyModule_Create(&fputsmodule);

    /* Add int constant by name */
    PyModule_AddIntConstant(module, "FPUTS_FLAG", 64);

    /* Define int macro */
    #define FPUTS_MACRO 256

    /* Add macro to module */
    PyModule_AddIntMacro(module, FPUTS_MACRO);

    return module;
}

This function takes the following arguments:

  • The instance of your module
  • The name of the macro that has already been defined

Note: If you want to add string constants or macros to your module, then you can use PyModule_AddStringConstant() and PyModule_AddStringMacro(), respectively.
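As a quick sketch of those two calls (the names FPUTS_VERSION and FPUTS_STR_MACRO and their values are made up for illustration):

PyMODINIT_FUNC PyInit_fputs(void) {
    /* Assign module value */
    PyObject *module = PyModule_Create(&fputsmodule);

    /* Add a string constant by name (hypothetical name and value) */
    PyModule_AddStringConstant(module, "FPUTS_VERSION", "1.0.0");

    /* Define a string macro and add it under its own name */
    #define FPUTS_STR_MACRO "fputs macro value"
    PyModule_AddStringMacro(module, FPUTS_STR_MACRO);

    return module;
}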

Open up the Python interpreter to see if your constants and macros are working as expected:

>>> import fputs
>>> # Constants
>>> fputs.FPUTS_FLAG
64
>>> fputs.FPUTS_MACRO
256

Here, you can see that the constants are accessible from within the Python interpreter.

Testing Your Module

You can test your Python C extension module just as you would any other Python module. This can be demonstrated by writing a small test function for pytest:

import fputs

def test_copy_data():
    content_to_copy = "Real Python!"
    bytes_copied = fputs.fputs(content_to_copy, 'test_write.txt')

    with open('test_write.txt', 'r') as f:
        content_copied = f.read()

    assert content_copied == content_to_copy

In the test script above, you use fputs.fputs() to write the string "Real Python!" to an empty file named test_write.txt. Then, you read in the contents of this file and use an assert statement to compare it to what you had originally written.

You can run this test suite to make sure your module is working as expected:

$ pytest -q
test_fputs.py                                                 [100%]
1 passed in 0.03 seconds

Considering Alternatives

In this tutorial, you’ve built an interface for a C library function to understand how to write Python C extension modules. However, there are times when all you need to do is invoke some system calls or a few C library functions, and you want to avoid the overhead of writing two different languages. In these cases, you can use Python libraries such as ctypes or cffi.

These are Foreign Function libraries for Python that provide access to C library functions and data types. Though the community itself is divided as to which library is best, both have their benefits and drawbacks. In other words, either would make a good choice for any given project, but there are a few things to keep in mind when you need to decide between the two:

  • The ctypes library comes included in the Python standard library. This is very important if you want to avoid external dependencies. It allows you to write wrappers for other languages in Python.

  • The cffi library is not yet included in the standard library. This might be a dealbreaker for your particular project. In general, it’s more Pythonic in nature, but it doesn’t handle preprocessing for you.

For more information on these libraries, check out Extending Python With C Libraries and the “ctypes” Module and Interfacing Python and C: The CFFI Module.

Note: Apart from ctypes and cffi, there are various other tools available. For instance, you can also use SWIG and Boost.Python.

Conclusion

In this tutorial, you’ve learned how to write a Python interface in the C programming language using the Python API. You wrote a Python wrapper for the fputs() C library function. You also added custom exceptions and constants to your module before building and testing it.

The Python API provides a host of features for writing complex Python interfaces in the C programming language. At the same time, libraries such as cffi or ctypes can lower the amount of overhead involved in writing Python C extension modules. Make sure you weigh all the factors before making a decision!

Microservices: Implementing the Outbox Pattern

The Problem Statement

Microservices often publish events after performing a database transaction. Writing to the database and publishing an event are two different transactions and they have to be atomic. A failure to publish an event can mean critical failure to the business process.

To explain the problem statement better, let’s consider a Student microservice that handles enrolling a student. After enrollment, the "Course Catalog" service emails the student all the available courses. Assuming an event-driven application, the Student microservice enrolls the student by inserting a record in the database and publishes an event stating that the enrollment for the student is complete. The "Course Catalog" service listens to this event and performs its actions. In a failure scenario, if the Student microservice goes down after inserting the record but before publishing the event, the system would be left in an inconsistent state.

Image 1. Failure while publishing an event after updating/inserting the database

The OutBox Pattern

This pattern provides an effective solution to publish events reliably. The idea of this approach is to have an “Outbox” table in the service’s database. When receiving a request for enrollment, not only an insert into the Student table is done, but a record representing the event is also inserted into the Outbox table. The two database actions are done as part of the same transaction.

An asynchronous process monitors the Outbox table for new entries and if there are any, it publishes the events to the Event Bus. The pattern merely splits the two transactions over different services, increasing reliability.

Image 2. Two separate transactions using the outbox pattern.

A description of this pattern can be found on Chris Richardson’s excellent microservices.io site. As described on the site there are two approaches to implementing the Outbox pattern (Transaction log tailing and Polling publisher). We will be using the log tailing approach in the solution below.

Transaction log tailing can be implemented in a very elegant and efficient way using Change Data Capture (CDC) with Debezium and Kafka-Connect.

Outbox Pattern With Kafka Connect

Solution Design

The Student microservice exposes endpoints to perform database operations on the domain. The microservice uses a Postgres database, which houses two tables “Student” and “Outbox”. The transactional operations, modify/insert into the “Student” table and adds a record in the “Outbox” table.

The Kafka-Connect framework runs as a separate service besides the Kafka broker. The Debezium connector for Postgres is deployed on the Kafka-Connect runtime, to capture the changes on the database. In our example, a custom connector is also deployed within Kafka-Connect to help identify the right Kafka topics for an event.

The Debezium connector tails the database transaction logs (write-ahead log) from the ‘‘Outbox’’ table and publishes an event to the topics defined by the custom connector.

Image 3. Solution Design

This solution guarantees at-least-once delivery, since Kafka Connect services ensure that each connector is always running; however, there is a chance the solution can publish the same event multiple times between connectors going down and starting up. To ensure exactly-once delivery, the consuming client must be Idempotent, making sure the duplicate events aren’t processed again.

Understanding the Code

You can find the code here. I would encourage you to read through the story – since I have walked through some key implementation details and the limitations of this pattern.

Student Microservice

This is a simple Spring-Boot microservice, which exposes three endpoints via the REST controller and uses Spring-JPA for database actions. The endpoints exposed are a GET for fetching student information, a POST for creating or enrolling a student, and a PUT for updating the student's email address. The POST and the PUT generate the events ‘Student Enrolled’ and ‘Student Email Changed’. The logic that invokes the database actions and inserts the event is handled in the Service class.

@Transactional
    public StudentDTO enrollStudent(EnrollStudentDTO student) 
      throws Exception {
        log.info("Enroll Student details for StudentId: {}", 
                 student.getName());

        StudentEntity studentEntity = StudentMapper.
          INSTANCE.studentDTOToEntity(student);
        studentRepository.save(studentEntity);

        //Publish the event
        event.fire(EventUtils.createEnrollEvent(studentEntity));

        return StudentMapper.INSTANCE.studentEntityToDTO(studentEntity);
    }

    ...

    public static OutboxEvent createEnrollEvent(StudentEntity studentEntity) 
    {
      ObjectMapper mapper = new ObjectMapper();
      JsonNode jsonNode = mapper.convertValue(studentEntity, JsonNode.class);

      return new OutboxEvent(
              studentEntity.getStudentId(),
              "STUDENT_ENROLLED",
              jsonNode
      );
    }

The method needs the Transactional annotation so that the database action and the event write are bound by a single transaction. The enrollStudent() method creates a new record in the Student table and then fires an event using Spring’s [ApplicationEventPublisherAware](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/context/ApplicationEventPublisherAware.html) support. The method createEnrollEvent() helps build the data to be inserted into the Outbox. Inserting the event into the ‘Outbox’ table is handled in the EventService class, which uses a Spring-JPA Repository to handle the database interactions.

@EventListener
    public void handleOutboxEvent(OutboxEvent event) {

        UUID uuid = UUID.randomUUID();
        OutBoxEntity entity = new OutBoxEntity(
                uuid,
                event.getAggregateId(),
                event.getEventType(),
                event.getPayload().toString(),
                new Date()
        );

        log.info("Handling event : {}.", entity);

        outBoxRepository.save(entity);

        /*
         * Delete the event once written, so that the outbox doesn't grow.
         * The CDC eventing polls the database log entry and not the table in the database.
         */
        outBoxRepository.delete(entity);
    }

A key thing to note here is the code deletes the record on the ‘Outbox’ Table once it has been written so that the outbox table doesn’t grow. Also, Debezium doesn’t examine the actual contents of the database table, but instead it tails the write-ahead transaction log. The calls to save() and delete() will make a CREATE and a DELETE entry in the log, once the transaction commits. The Kafka-Connect custom transformer can be programmed not to perform any action on the DELETE entry.

Custom Debezium Transformer

This component determines the Kafka topic to which the event needs to be published. This is done by using the EVENT_TYPE column of the payload from the ‘Outbox’ table. The component is built as a JAR and will be placed in the Kafka-Connect runtime. The setup of placing the JAR in the Kafka-Connect runtime is handled by the DockerFile.

FROM debezium/connect
ENV DEBEZIUM_DIR=$KAFKA_CONNECT_PLUGINS_DIR/debezium-transformer

RUN mkdir $DEBEZIUM_DIR
COPY target/custom-debezium-transformer-0.0.1.jar $DEBEZIUM_DIR

We use the image debezium/connect, since it comes preloaded with all available connectors. The component consists of just one class that helps determine the topic before the message is published.

public class CustomTransformation<R extends ConnectRecord<R>> implements Transformation<R> {

    /**
     * This method is invoked when a change is made on the outbox schema.
     *
     * @param sourceRecord
     * @return
     */
    public R apply(R sourceRecord) {

        Struct kStruct = (Struct) sourceRecord.value();
        String databaseOperation = kStruct.getString("op");

        //Handle only the Create's
        if ("c".equalsIgnoreCase(databaseOperation)) {

            // Get the details.
            Struct after = (Struct) kStruct.get("after");
            String UUID = after.getString("uuid");
            String payload = after.getString("payload");
            String eventType = after.getString("event_type").toLowerCase();
            String topic = eventType.toLowerCase();

            Headers headers = sourceRecord.headers();
            headers.addString("eventId", UUID);

            // Build the event to be published.
            sourceRecord = sourceRecord.newRecord(topic, null, Schema.STRING_SCHEMA, UUID,
                    null, payload, sourceRecord.timestamp(), headers);
        }

        return sourceRecord;
    }

The transformer implements the Kafka-Connect Transformation interface. The apply() method filters for the CREATE operation (‘c’), skipping the DELETEs, as explained above.

For every CREATE the topic name is identified and the payload is returned. For simplicity in this example, the topic name is the lowercase value of the EVENT_TYPE column, inserted into the “Outbox” table by the Student Microservice.

Installation of the Needed Images and Frameworks

The guide assumes the user has docker pre-installed. Creating the Debezium Connect Image is done by triggering a maven build on the custom-debezium-connect project and building the docker image.

mvn clean install
docker build -t custom-debezium-connect .

Running the Docker Compose under the project folder installs all the pre-requisites: Zookeeper, Kafka, Postgres, and Kafka-Connect. The Docker Compose file:

version: "3.5"

services:
  # Install postgres and setup the student database.
  postgres:
    container_name: postgres
    image: debezium/postgres
    ports:
      - 5432:5432
    environment:
      - POSTGRES_DB=studentdb
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password

  # Install zookeeper. 
  zookeeper:
    container_name: zookeeper
    image: zookeeper
    ports:
      - 2181:2181

  # Install kafka and create needed topics. 
  kafka:
    container_name: kafka
    image: confluentinc/cp-kafka
    hostname: kafka
    ports:
      - 9092:9092
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092,PLAINTEXT_HOST://kafka:29092
      LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CREATE_TOPICS: "student_email_changed:1:1,student_enrolled:1:1"
    depends_on:
      - zookeeper

  # Install debezium-connect. 
  debezium-connect:
    container_name: custom-debezium-connect
    image: custom-debezium-connect
    hostname: debezium-connect 
    ports:
      - '8083:8083'
    environment:
      GROUP_ID: 1
      CONFIG_STORAGE_TOPIC: debezium_connect_config
      OFFSET_STORAGE_TOPIC: debezium_connect_offsets
      STATUS_STORAGE_TOPIC: debezium_connect_status
      BOOTSTRAP_SERVERS: kafka:29092
    depends_on:
      - kafka
      - postgres

We use the image debezium/postgres because it comes prebuilt with the logical decoding feature. This is a mechanism that allows the extraction of the changes that were committed to the transaction log, making CDC possible. The documentation for installing the plugin for Postgres can be found here.

Setting Up the Kafka Topics

Execute the commands below to create the two Kafka topics, "student_enrolled" and "student_email_changed":

docker exec -t kafka /usr/bin/kafka-topics \
      --create --bootstrap-server :9092 \
      --topic student_email_changed \
      --partitions 1 \
      --replication-factor 1

docker exec -t kafka /usr/bin/kafka-topics \
      --create --bootstrap-server :9092 \
      --topic student_enrolled \
      --partitions 1 \
      --replication-factor 1

Linking the Debezium Kafka Connect With the Outbox Table

Execute the below curl command to create a connector in the Kafka-Connect server. This connector points to the Postgres installation and also specifies the table and the custom transformer class we built earlier.

curl -X POST \
  http://localhost:8083/connectors/ \
  -H 'content-type: application/json' \
  -d '{
   "name": "student-outbox-connector",
   "config": {
      "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
      "tasks.max": "1",
      "database.hostname": "postgres",
      "database.port": "5432",
      "database.user": "user",
      "database.password": "password",
      "database.dbname": "studentdb",
      "database.server.name": "pg-outbox-server",
      "tombstones.on.delete": "false",
      "table.whitelist": "public.outbox",
      "transforms": "outbox",
      "transforms.outbox.type": "com.sohan.transform.CustomTransformation"
   }
}'

That completes the setup needed: we have Zookeeper running on port 2181, Kafka running on port 9092 with all the needed topics, Postgres running on port 5432 with the ‘studentdb’ database pre-created, and finally Kafka-Connect with Debezium and our custom transformer running on port 8083.

Running the Solution

Once the Student microservice is started, we can see the pattern in action. To simulate a student enrollment, we can execute the curl below.

curl -X POST \
  'http://localhost:8080/students/~/enroll' \
  -H 'content-type: application/json' \
  -d '{ 
    "name": "Megan Clark",
    "email": "[email protected]",
    "address": "Toronto, ON"
}'

We see that a new student record is inserted into the database for ‘Megan Clark’.

Image 4. Student Enrolled inserted into the database

And we see an event published into the topic student_enrolled, notifying the downstream systems that ‘Megan Clark’ has enrolled.

Image 5. Console consumer to verify data being published to Kafka

To simulate a student updating the email address, we can execute the below curl operation.

curl -X PUT \
  http://localhost:8080/students/1/update-email \
  -H 'content-type: application/json' \
  -d '{ "email": "[email protected]" }'

We can see that the email has been changed to ‘[email protected]’.

Image 6. Student Email changed in the database

And we see an event published to the topic student_email_changed, notifying the downstream systems that the student with Student-ID ‘1’ has changed their email address.

Image 7. Console consumer to verify data being published to Kafka

If we comment out the line of code in the EventService that deletes outbox events after writing them (outBoxRepository.delete(entity)), we can view the events inserted in the outbox table.

Image 8. Events in the OutBox Table.

Summary

In a microservice architecture, system failure is inevitable. Adopting this architecture style forces us to design for failure. The Outbox Pattern gives us a robust method of reliable messaging in the face of failure.

The above solution makes the implementation of the pattern simple. But to make the system highly available, we must run multiple instances (clusters) of Zookeeper, Apache Kafka, and Kafka Connect.

Finally, I would like to point out this isn’t the only way to tackle the problem of reliable messaging. But it is an invaluable pattern to have at your disposal.

Thanks for reading!

PHP variable to JavaScript In Laravel

In PHP, there has always been a limitation on passing PHP variables to a JavaScript file. Many PHP developers face this issue. But Laravel has a package which sets the variable directly in JS, so you can use this package for passing a variable from a controller to JavaScript.

However, we know the Laravel framework has the largest community, and similarly Laravel has a large package base. So, sometimes it is easier to use a package rather than writing custom code. Moreover, you can reuse existing packages.

Pass PHP variable to JavaScript

Many times you will find the need to pass a PHP variable or string to a JavaScript file. For this, you can use the package below.

Install package

Just copy the package requirement below and paste it into the composer.json file.

"laracasts/utilities": "^2.1"

Or, similarly, run this command from the command line or terminal:

composer require laracasts/utilities

Provider

Further, add the service provider to the config/app.php file, which is used for the binding. After this, you will get the JavaScript facade.

'providers' => [
    '...',
    'Laracasts\Utilities\JavaScript\JavaScriptServiceProvider'
];

Publish the Configuration:

Finally, publish the config using the below command.

php artisan vendor:publish --provider="Laracasts\Utilities\JavaScript\JavaScriptServiceProvider"

Further, it will copy the javascript.js file to the config folder. There are two variables to which you can assign values. Firstly, bind_js_vars_to_this_view sets the name of the view; basically, this is a partial view, and if there is more than one view you can set an array. Secondly, js_namespace is where you define the namespace of the JS variable. By default, Laracasts is the namespace, but you can change it to something else.

Usage in controller

Import the Dependency

You can initialize the JavaScript variable in the controller. Import the dependency into the controller:

use Javascript;

Set Variable

Now, you can define the variable in the controller using the JavaScript facade. You set the values as key-value pairs, assigning a value to each key.

$array = [
    'key' => value_here,
];
 
JavaScript::put($array);

So, put your values into the JavaScript variable using an array. Once you define the array and pass it to JavaScript::put(), you can access the values in the JS file.

Now, your controller will look like below.

<?php
namespace App\Http\Controllers;
 
use Illuminate\Http\Request;
use JavaScript;
 
class VarController extends Controller
{
 
 
    /**
     * Display a listing of the resource.
     *
     * @return \Illuminate\Http\Response
     */
    public function index()
    {
        \JavaScript::put([
            'foo' => 'bar',
            'user' => 'user',
            'time' => '2019-04-29'
        ]);
        return view('pages/home');
    }
}

Access In JS file

Now, your blade file will look like below.

@extends('layouts.default')
@section('content')
    <div style="text-align:center">
        <h1>PHP var to JavaScript</h1>
         
    </div>
     
@stop
@section('scripts')
<script src='assets/app.js'></script>
@stop

In the JS file, you can access the variables you set under the Laracasts namespace, which you defined in the javascript.js file.

console.log(Laracasts.foo); // access foo variable
console.log(Laracasts.user);// access user variable
console.log(Laracasts.time);// access time variable

In this way, you can directly access the variables you set. Getting a value into a JS file is otherwise a limitation in PHP, so this package gives you simple, direct access in the JS file.

In many cases, this will give you the simplest solution. Many developers have to face this problem in development, so this post should give some relief from it. Let me know if you face any issues.

Thanks For Visiting, Keep Visiting. If you liked this post, share it with all of your programming buddies!