# Recent Posts (page 2 / 8)

## Accelerate MATLAB with Python and Numba

Part 11 of the Python Is The Ultimate MATLAB Toolbox series.

Upper panel: flow computed with MATLAB 2022b using code by Jamie Johns.

Lower panel: flow computed with MATLAB 2022b calling the same Navier-Stokes solver translated to Python with Numba. It completes the simulation more than three times in the time the MATLAB-only solver takes to run it once.

## Introduction

MATLAB’s number crunching power is impressive. If you’re working with a compute-bound application though, you probably want it to run even faster. After you’ve exhausted the obvious speed enhancements like using more efficient algorithms, vectorizing all operations, and running on faster hardware, your last hope may be to rewrite the slow parts in a compiled language linked to MATLAB through mex. This is no easy task though. Re-implementing a MATLAB function in C++, Fortran, or Java is tedious at best, and completely impractical at worst if the code to be rewritten calls complex MATLAB or Toolbox functions.

#### Python and Numba are an alternative to mex

In this article I show how Python and Numba can greatly improve MATLAB performance with less effort, and with greater flexibility, than using mex with C++, Fortran, or Java. You’ll need to include Python in your MATLAB workflow, but that’s no more difficult than installing compilers and configuring mex build environments.

Two examples demonstrate the performance boost Python and Numba can bring MATLAB: a simple Mandelbrot set computation, and a much more involved Navier-Stokes solver for incompressible fluid flow. In both cases, the Python augmentation makes the MATLAB application run several times faster.

## Python+Numba as an alternative to mex

The mex compiler front-end enables the creation of functions written in C, C++, Fortran, Java, and .Net that can be called directly from MATLAB. To use it effectively, you’ll need to be proficient in one of these languages and have a supported compiler. With those in place, the fastest way to begin writing a mex-compiled function is to start with existing working code, such as one of the MathWorks' mex examples, and modify it to your needs.

A big challenge to translating MATLAB code is that one line of MATLAB can translate to dozens of lines in a compiled language. Standard MATLAB statements such as

y = max((A\b)*v',[],'all');

become a logistical headache in the compiled languages. Do you have access to an optimized linear algebra library? Do you know how to set up the calling arguments? What if the inputs are complex? What if A is rectangular? It is not straightforward.

Python, on the other hand, can often match MATLAB expressions one-to-one (although the Python expressions are usually longer). The Python version of the line above is

Python:

y = np.max(np.linalg.solve(A, b).dot(v.T))

where np. is the prefix for the NumPy module.
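To double-check the correspondence, here is a small numeric sketch (the shapes and random inputs are my own choices, not from the original post) comparing the one-line NumPy expression with the same computation done step by step:

```python
import numpy as np

# Hypothetical small inputs: A is 3x3, b and v are 3x1 columns, so that
# (A\b)*v' in MATLAB is a 3x3 outer product whose overall maximum we take.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
b = rng.standard_normal((3, 1))
v = rng.standard_normal((3, 1))

y = np.max(np.linalg.solve(A, b).dot(v.T))  # MATLAB: max((A\b)*v',[],'all')

# the same computation, one step at a time
x = np.linalg.solve(A, b)   # A\b
outer = x @ v.T             # (A\b)*v'
assert outer.shape == (3, 3)
assert np.isclose(y, outer.max())
```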

If you’re familiar with Python and its ecosystem of numeric and scientific modules, you’ll be able to translate a MATLAB function to Python much faster than you can translate MATLAB to any compiled language.

For the most part, numerically intensive codes run at comparable speeds in MATLAB and Python. Simply translating MATLAB to Python will rarely give you a worthwhile performance boost. To see real acceleration you’ll need to additionally use a Python code compiler such as Cython, Pythran, f2py, or Numba.

Numba offers the most bang for the buck—the greatest performance for the least amount of effort—and is the focus of this post.

## What is Numba?

From the Numba project’s Overview:

Numba is a compiler for Python array and numerical functions that gives you the power to speed up your applications with high performance functions written directly in Python.

Numba generates optimized machine code from pure Python code using the LLVM compiler infrastructure. With a few simple annotations, array-oriented and math-heavy Python code can be just-in-time optimized to performance similar to C, C++ and Fortran, without having to switch languages or Python interpreters.

### Type casting Python function arguments

Three steps are needed to turn a conventional Python function into a much faster Numba-compiled function:

1. Import the jit function and all data types you plan to use from the Numba module. Also import the prange function if you plan to run for loops in parallel.
2. Precede your function definition with @jit()
3. Pass to @jit() type declarations for each input argument and return value

The steps are illustrated below with a simple function that accepts a 2D array of 32 bit floating point numbers and a 64 bit floating point tolerance, then returns an unsigned 64 bit integer containing the count of terms in the array that are less than the tolerance. We’ll ignore the fact that this is a simple operation in both MATLAB (sum(A < tol, 'all')) and Python (np.sum(A < tol)).

First the plain Python version:

Python:

def n_below(A, tol):
    count = 0
    nR, nC = A.shape
    for r in range(nR):
        for c in range(nC):
            if A[r,c] < tol:
                count += 1
    return count

Now with Numba:

Python:

from numba import jit, uint64, float32, float64
@jit(uint64(float32[:,:], float64), nopython=True)
def n_below_numba(A, tol):
    count = 0
    nR, nC = A.shape
    for r in range(nR):
        for c in range(nC):
            if A[r,c] < tol:
                count += 1
    return count

Only two lines were added—the import and the @jit() decorator; the body of the function did not change. Let’s see what this buys us:

In : A = np.random.rand(2000,3000).astype(np.float32)
In : tol = 0.5
In : %timeit n_below(A, tol)         # conventional Python
11.3 s ± 72.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In : %timeit n_below_numba(A, tol)   # Python+Numba
9.92 ms ± 251 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Time dropped from 11.3 seconds to 9.92 milliseconds, a factor of more than 1,000. While this probably says more about how slow Python 3.8 is for this problem, the Numba version is impressive. It is in fact even a bit faster than the native NumPy version:

In : %timeit np.sum(A < tol)
10.2 ms ± 193 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

For completeness, MATLAB 2022b does this about as quickly as NumPy:

>> A = single(rand(2000,3000));
>> tol = 0.5;
>> tic; sum(A < tol); toc

Repeating the last line a few times gives a best (that is, minimum) time of 0.011850 seconds (11.85 ms).

### Options

The @jit() decorator has a number of options that affect how Python code is compiled. All four of the options below appear on every @jit() instance in the Navier-Stokes code. Here’s an example:

@jit(float64[:,:](float64[:,:], float64),
     nopython=True, fastmath=True, parallel=True, cache=True)
def DX2(a, dn):
    # finite difference Laplacian in two dimensions
    # (second deriv in x + second deriv in y, staggered grid)
    return (a[:-2,1:-1] + a[2:,1:-1] + a[1:-1,2:] + a[1:-1,:-2] -
            4*a[1:-1,1:-1])/(dn**2)

#### nopython=True

Numba-compiled functions can work either in object mode, where the code can interact with the Python interpreter, or in nopython mode, which bypasses the Python interpreter entirely and yields much higher performance.

#### fastmath=True

This option enables LLVM's fast-math optimizations, comparable to the -ffast-math switch (enabled by -Ofast) in the Gnu Compiler Collection. It allows the compiler to resequence floating point computations in a non IEEE 754 compliant manner—enabling significant speed increases at the risk of generating incorrect (!) results.

To use this option with confidence, compare solutions produced with and without it. Obviously if the results differ appreciably you shouldn’t use this option.
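The risk exists because floating point arithmetic is not associative: reordering operations, which fastmath permits, changes results slightly. A tiny pure-Python illustration:

```python
# Floating point addition is not associative: summing the same three
# numbers in a different order gives (slightly) different answers.
# This is the kind of reordering fastmath=True allows the compiler to do.
a, b, c = 0.1, 0.2, 0.3
left  = (a + b) + c   # 0.6000000000000001
right = a + (b + c)   # 0.6
assert left != right
assert abs(left - right) < 1e-15   # tiny, but nonzero
```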

#### parallel=True

This option tells Numba to attempt automatic parallelization of code blocks in jit compiled functions. Parallelization may be possible even if these functions do not explicitly call prange() to make a for loop run in parallel.

Let’s try this out on the simple n_below() function shown above:

Python:

from numba import jit, prange, uint64, float32, float64
@jit(uint64(float32[:,:], float64), nopython=True, parallel=True)
def n_below_parallel(A, tol):
    count = 0
    nR, nC = A.shape
    for r in prange(nR):    # <- parallel for loop
        for c in range(nC):
            if A[r,c] < tol:
                count += 1
    return count

By parallelizing the rows of A over cores I get twice the performance on my 4 core laptop:

In : %timeit n_below_parallel(A, tol)
5.96 ms ± 299 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Even more impressively, the Navier-Stokes solver runs nearly 2x faster with parallel=True on my 4 core laptop even though it has no explicit parallel for loops.

#### cache=True

A program that includes Numba-enhanced Python functions starts more slowly because the jit compiler must compile the functions before the program can begin running. The delay is barely noticeable with small functions.

That’s not the case for the more complicated jit-compiled functions in the Navier-Stokes solver. These impose a 10 second start-up delay on my laptop. The cache=True option tells Numba to cache the result of its compilations for immediate use on subsequent runs. The code is recompiled only if any of the @jit decorated functions change.

## Hardware, OS details

The examples were run on a 2015 Dell XPS13 laptop with 4 cores of i5-5200U CPU @ 2.2 GHz and 8 GB memory. The OS is Ubuntu 20.04. MATLAB 2022b was used but the code will run with any MATLAB version from 2020b onward.

Anaconda Python 2020.07 with Python 3.8.8 was used to permit running on older versions of MATLAB, specifically 2020b. (Yes, I’m aware Python 3.11 runs more quickly than 3.8. MATLAB does not yet support that version though.)

## Example 1: Mandelbrot set

This example comes from the High Performance Computing chapter of my book. There I implement eight versions of code that compute terms of the Mandelbrot set: MATLAB, Python, Python with multiprocessing, Python with Cython, Python with Pythran, Python with f2py, Python with Numba, and finally MATLAB calling the Python/Numba functions.

The baseline MATLAB version is much faster than the baseline Python implementation. However, employing any of the compiler-augmented tools like Cython, Pythran, f2py, or Numba turns the tables and makes the Python solutions much faster than MATLAB.

baseline MATLAB²:

% file: MB_main.m
main()

function [i] = nIter(c, imax)
    z = complex(0,0);
    for i = 0:imax-1
        z = z^2 + c;
        if (real(z)^2 + imag(z)^2) > 4
            break
        end
    end
end

function [img] = MB(Re, Im, imax)
    nR = size(Im,2);
    nC = size(Re,2);
    img = zeros(nR, nC, 'uint8');
    % parfor i = 1:nR   % gives worse performance
    for i = 1:nR
        for j = 1:nC
            c = complex(Re(j), Im(i));
            img(i,j) = nIter(c, imax);
        end
    end
end

function [] = main()
    imax = 255;
    for N = [500 1000 2000 5000]
        tic
        nR = N;
        nC = N;
        Re = linspace(-0.7440, -0.7433, nC);
        Im = linspace( 0.1315, 0.1322, nR);
        img = MB(Re, Im, imax);
        fprintf('%5d %.3f\n', N, toc);
    end
end

baseline Python:

#!/usr/bin/env python3
# file: MB.py
import numpy as np
import time

def nIter(c, imax):
    z = complex(0, 0)
    for i in range(imax):
        z = z*z + c
        if z.real*z.real + z.imag*z.imag > 4:
            break
    return np.uint8(i)

def MB(Re, Im, imax):
    nR = len(Im)
    nC = len(Re)
    img = np.zeros((nR, nC), dtype=np.uint8)
    for i in range(nR):
        for j in range(nC):
            c = complex(Re[j], Im[i])
            img[i,j] = nIter(c, imax)
    return img

def main():
    imax = 255
    for N in [500, 1000, 2000, 5000]:
        T_s = time.time()
        nR, nC = N, N
        Re = np.linspace(-0.7440, -0.7433, nC)
        Im = np.linspace( 0.1315, 0.1322, nR)
        img = MB(Re, Im, imax)
        print(N, time.time() - T_s)

if __name__ == '__main__':
    main()
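As a quick sanity check on the escape-time logic (this snippet is mine, not part of the original code), the iteration count behaves as expected at two easy points: c = 3 escapes on the first iteration, while c = 0 never escapes:

```python
def n_iter(c, imax=255):
    # escape-time count: iterate z -> z^2 + c until |z|^2 exceeds 4
    z = complex(0, 0)
    i = 0
    for i in range(imax):
        z = z*z + c
        if z.real*z.real + z.imag*z.imag > 4:
            break
    return i

assert n_iter(complex(3, 0)) == 0      # |3|^2 = 9 > 4 on the first iteration
assert n_iter(complex(0, 0)) == 254    # never escapes; loop runs to the end
```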

The Python with Numba version is the same as the baseline Python version with five additional Numba-specific lines. An explanation of each appears below the code:

Python with Numba:

 1  #!/usr/bin/env python3
 2  # file: MB_numba.py
 3  import numpy as np
 4  import time
 5  from numba import jit, prange, uint8, int64, float64, complex128
 6  @jit(uint8(complex128,int64), nopython=True, fastmath=True)
 7  def nIter(c, imax):
 8      z = complex(0, 0)
 9      for i in range(imax):
10          z = z*z + c
11          if z.real*z.real + z.imag*z.imag > 4:
12              break
13      return i
14
15  @jit(uint8[:,:](float64[:], float64[:], int64),
16       nopython=True, fastmath=True, parallel=True)
17  def MB(Re, Im, imax):
18      nR = len(Im)
19      nC = len(Re)
20      img = np.zeros((nR, nC), dtype=np.uint8)
21      for i in prange(nR):
22          for j in range(nC):
23              c = complex(Re[j], Im[i])
24              img[i,j] = nIter(c, imax)
25      return img
26
27  def main():
28      imax = 255
29      for N in [500, 1000, 2000, 5000]:
30          T_s = time.time()
31          nR, nC = N, N
32          Re = np.linspace(-0.7440, -0.7433, nC)
33          Im = np.linspace( 0.1315, 0.1322, nR)
34          img = MB(Re, Im, imax)
35          print(N, time.time() - T_s)
36  if __name__ == '__main__':
37      main()

Line 5 imports the necessary items from the numba module: jit() is a decorator that precedes functions we want Numba to compile. prange() is a parallel version of the Python range() function. It turns regular for loops into parallel for loops where work is automatically distributed over the cores of your computer. The remaining items from uint8 to complex128 define data types that will be employed in function signatures.

Line 6 defines the signature of the nIter() function as taking a complex scalar and 64 bit integer as inputs and returning an unsigned 8 bit integer. (The remaining keyword arguments like nopython= were explained above.)

Lines 15 and 16 define the signature of the MB() function as taking a pair of 1D double precision arrays and a scalar 64 bit integer and returning a 2D array of unsigned 8 bit integers.

Line 21 implements a parallel for loop; iterations for different values of i are distributed over the cores.

What have we gained with the five extra Numba lines? This table shows how our three codes perform for different image sizes; times are in seconds.

| N    | MATLAB 2022b | Python 3.8.8 | Python 3.8.8 + Numba |
|------|--------------|--------------|----------------------|
| 500  | 0.70         | 8.22         | 0.04                 |
| 1000 | 2.29         | 32.70        | 0.14                 |
| 2000 | 9.04         | 127.84       | 0.61                 |
| 5000 | 56.77        | 795.41       | 3.51                 |

The Python+Numba performance borders on the unbelievable—but don’t take my word for it, run the three implementations yourself!

One “cheat” the Python+Numba solution enjoys is that its parallel for loop takes advantage of the four cores on my laptop. I tried MATLAB’s parfor parallel for loop construct but, inexplicably, it runs slower than a conventional sequential for loop. Evidently multiple cores are only used if you have a license for the Parallel Computing Toolbox (which I don’t).

### MATLAB + Python + Numba

If Python can run faster with Numba, MATLAB can too. All we need to do is import the MB_numba module and call its jit-compiled MB() function from MATLAB:

MATLAB:

% file: MB_python_numba.m
np = py.importlib.import_module('numpy');
MB_numba = py.importlib.import_module('MB_numba');
imax = int64(255);
for N = [500 1000 2000 5000]
    tic
    nR = N; nC = N;
    Re = np.array(linspace(-0.7440, -0.7433, nC));
    Im = np.array(linspace( 0.1315, 0.1322, nR));
    img = py2mat(MB_numba.MB(Re, Im, imax));
    fprintf('%5d %.3f\n', N, toc);
end

MB() from MB_numba.py is a Python function so it returns a Python result. To make the benchmark against the baseline MATLAB version fair, the program includes conversion of the NumPy img array to a MATLAB matrix (using py2mat.m) in the elapsed time. The table below repeats the MATLAB baseline times from the previous table. Numeric values in the middle and right column are elapsed time in seconds.

| N    | MATLAB 2022b | MATLAB 2022b + Python 3.8.8 + Numba |
|------|--------------|-------------------------------------|
| 500  | 0.70         | 0.18                                |
| 1000 | 2.29         | 0.17                                |
| 2000 | 9.04         | 0.64                                |
| 5000 | 56.77        | 3.85                                |

### Visualizing the result

Call MATLAB’s imshow() or imagesc() functions on the img matrix if you want to see what the matrix looks like. For example,

MATLAB:

% file: MB_view.m
np = py.importlib.import_module('numpy');
MB_numba = py.importlib.import_module('MB_numba');
imax = int64(255);
nR = 2000; nC = 2000;
Re = np.array(linspace(-0.7440, -0.7433, nC));
Im = np.array(linspace( 0.1315, 0.1322, nR));
img = py2mat(MB_numba.MB(Re, Im, imax));
imagesc(img)

Similarly, in Python:

#!/usr/bin/env python3
# file: MB_view.py
import numpy as np
from MB_numba import MB
import matplotlib.pyplot as plt
imax = 255
nR, nC = 2000, 2000
Re = np.linspace(-0.7440, -0.7433, nC)
Im = np.linspace( 0.1315, 0.1322, nR)
img = MB(Re, Im, imax)
plt.imshow(img)
plt.show()

## Example 2: incompressible fluid flow

The Mandelbrot example is useful for demonstrating how to use Numba and for giving insight into the performance boosts it can give. But is it a computational pattern that resembles real work? Not so much.

The second example represents computations as might appear in a realistic scientific application. It demonstrates MATLAB+Python+Numba with a two dimensional Navier-Stokes solver for incompressible fluid flow. I chose an implementation by Jamie Johns which shows MATLAB at its peak performance using fully vectorized code. While this code only computes flow in 2D, the sequence of calculations and memory access patterns are representative of scientific and engineering applications. Jamie Johns' YouTube channel has interesting videos produced with his code.

### Boundary conditions

A clever aspect of Jamie Johns' code is that boundary conditions are defined graphically. You create an image, for example a PNG, where colors define properties at each grid point: black = boundary, red = outflow (source), and green = inflow (sink). The image borders, if they aren’t already black, also define boundaries.
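In code, the color-to-cell-type mapping amounts to a lookup per pixel. Here is a minimal pure-Python sketch; the function name and the 2x3 toy "image" are hypothetical, not taken from Jamie Johns' code:

```python
# Toy RGB "image" using the color convention described above:
# black = boundary, red = outflow (source), green = inflow (sink).
BLACK, RED, GREEN = (0, 0, 0), (255, 0, 0), (0, 255, 0)

def classify(pixel):
    # any color other than the three special ones is interior fluid
    return {BLACK: 'boundary', RED: 'outflow', GREEN: 'inflow'}.get(pixel, 'fluid')

image = [[BLACK, (255, 255, 255), RED],
         [BLACK, GREEN, (255, 255, 255)]]
grid = [[classify(p) for p in row] for row in image]
assert grid[0] == ['boundary', 'fluid', 'outflow']
assert grid[1] == ['boundary', 'inflow', 'fluid']
```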

This PNG image, which can be found in the original code’s GitHub repository as scenario_sphere2.png, represents flow moving from left to right around a circle, a canonical fluid dynamics problem I studied as an undergrad:

Of course, if boundary conditions are supplied as images, they can be arbitrarily complex. I recalled an amusing commercial by Maxell, “Blown Away Guy”, from my youth. With a bit of photo editing I converted the Maxell image to the following boundary conditions image:

The domain dimensions are 15 meters x 3.22 meters and the CFD mesh has 400 x 86 grid points, giving a relatively coarse resolution of 3.7 cm between points. My memory-limited (8 GB) laptop can’t handle a finer mesh.

### Initial conditions

• time step: 0.003 seconds
• number of iterations: 12,900
• air density = 1 kg/m$^3$
• dynamic viscosity = 0.001 kg/(m s)
• air speed leaving the speaker: 0.45 m/s

The low air speed is admittedly contrived and certainly lower than the commercial’s video suggests (my guess is that it's at least 2 m/s). I found the solver fails to converge, then exits with an error, if the flow speed is too high. The same goes for the number of iterations: too many and the solver diverges. Trial and error brought me to 0.45 m/s and 12,900 iterations as the values that produce the longest animation.

I don’t understand the reason for the failure (mesh too coarse?) and haven’t been motivated to figure out what’s going on. This is, after all, an exercise in performance tweaking rather than studying turbulence or computing coefficients of drag.
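For reference, the run parameters imply a small advective CFL number, one standard stability indicator for explicit flow solvers. This arithmetic check is mine, not from the original post:

```python
# Advective CFL number u*dt/dx for the stated run parameters
# (my own interpretation of the numbers given above).
dt = 0.003    # time step, s
dx = 0.037    # grid spacing, m (3.7 cm)
u  = 0.45     # inflow air speed, m/s
cfl = u * dt / dx
assert 0.03 < cfl < 0.04   # roughly 0.036, well below the classic limit of 1
```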

### Running the code

My modifications to Jamie Johns' MATLAB code and the translation to Python + Numba can be found at https://github.com/AlDanial/matlab_with_python/, under performance/fluid_flow (MIT license). The code there can be run three ways: entirely in MATLAB, entirely in Python, or with MATLAB calling Python.

One thing to note is that the computational codes are purely batch programs that write .mat and .npy data files. No graphics appear. The flow can be visualized with a separate post-processing step after the flow speeds are saved as files.

#### MATLAB (2020b through 2022b)

Start MATLAB and enter the name of the main program, Main, at the prompt. I had to start MATLAB with -nojvm to suppress the GUI (and close all other applications, especially web browsers) to give the program enough memory.

>> Main

#### Python + Numba

On Linux and macOS, just type ./py_Main.py in a console.

On Windows, open a Python-configured terminal (such as the Anaconda console) and enter python py_Main.py.

#### MATLAB + Python + Numba

Edit Main.m and change the value of Run_Python_Solver to true. Save it, then run Main at the MATLAB prompt.

#### Operation

Regardless of how you run the code (MATLAB / Python / MATLAB + Python), the steps of operation are the same: the application creates a subdirectory called NS_velocity then populates it with .npy files (Python and MATLAB+Python) or .mat files (pure MATLAB). These files contain the x and y components of the flow speed at each grid point for a batch of iterations.

The first time the Python or MATLAB+Python program starts, expect a delay of about 30 seconds while Numba compiles the @jit decorated functions in navier_stokes_functions.py. This is a one-time cost though; subsequent runs will start much more quickly.

#### Performance

MATLAB runs more than three times faster using the Python+Numba Navier-Stokes solver. More surprisingly, Python without Numba also runs faster than MATLAB. I updated the performance table on 2022-10-30 following a LinkedIn post by Artem Lenksy asking how the recently released 3.11 version fares on this problem. Numba is not yet available for 3.11 so I reran without @jit decorators—and threw in a 3.8.8 run without Numba as well:

| Solver                              | Elapsed seconds |
|-------------------------------------|-----------------|
| MATLAB 2022b                        | 458.35          |
| Python 3.11.0                       | 238.95          |
| Python 3.8.8                        | 237.46          |
| MATLAB 2022b + Python 3.8.8 + Numba | 128.89          |
| Python 3.8.8 + Numba                | 126.99          |

Two surprises here: plain Python + NumPy is faster than MATLAB, and Python 3.11.0 is no faster than 3.8.8 for this compute-bound problem.

### Visualization

The program animate_navier_stokes.py generates an animation of flow computed earlier with py_Main.py or Main.m. The most direct way to use it is to give it the directory (NS_velocity by default) containing the flow velocity files, plus a switch saying whether to read *.mat files generated by MATLAB (--matlab) or *.npy files generated by either Python or the MATLAB+Python combination (--python). (Note: the program loads many Python modules and takes a long time to start the first time.)

./animate_navier_stokes.py --python NS_velocity

With this command the program first loads all *.npy files in the NS_velocity directory, then writes a large merged file (merged_frames.npy, 1.7 GB) containing the entire solution in one file. You can later load the merged file much more quickly than reading the individual *.npy files.
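The merge step amounts to stacking the per-batch arrays along the frame axis. A minimal in-memory sketch with hypothetical shapes (the real script reads and writes .npy files on disk):

```python
import numpy as np

# Two hypothetical batches of frames, each (frames, rows, cols) for the
# 86 x 400 grid; merging concatenates them along the frame axis so the
# whole solution can be reloaded in one read.
batch1 = np.zeros((10, 86, 400), dtype=np.float32)
batch2 = np.ones((10, 86, 400), dtype=np.float32)
merged = np.concatenate([batch1, batch2], axis=0)
assert merged.shape == (20, 86, 400)
```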

The first thing you’ll notice is the animation proceeds slowly. To speed things up, skip a few frames. The next command reads from the merged frame file and only shows every 50th frame:

./animate_navier_stokes.py --npy merged_frames.npy --incr 50

Use the --png-dir switch to write individual frames to PNG files in the given directory. For example

./animate_navier_stokes.py --npy merged_frames.npy --png-dir PNG --incr 400

will create the directory PNG then populate it with 33 files with names like PNG/frame_08000.png. I created the video at the top of this post by saving individual frames, overlaying the Maxell image on each, then generating an MP4 file with ffmpeg.

2. Mike Croucher at the MathWorks provided code tweaks to improve the performance of this implementation.

## Symbolic Math in MATLAB with SymPy

Part 10 of the Python Is The Ultimate MATLAB Toolbox series.

## Introduction

SymPy is a Python-based computer algebra system with capabilities similar to Maple, Maxima, MATLAB’s Symbolic Math Toolbox, SageMath, and to a lesser extent, Mathematica. The most natural way to work with mathematical expressions symbolically in MATLAB is with the Symbolic Math Toolbox. If you don’t have access to this toolbox and you’re willing to include Python in your MATLAB workflow, SymPy may be an adequate alternative. SymPy is included with the recommended Anaconda Python distribution.

SymPy’s capabilities are vast—the PDF version of its recent v1.11.1 documentation is a staggering 3,184 pages—and this post merely covers the basics: defining symbolic variables, integrating, taking derivatives, solving simultaneous equations, summing series, generating LaTeX markup, and generating source code in MATLAB and other languages.

To give you a sense of SymPy’s depth, here are a few of its capabilities that aren’t covered in this article: linear, time-invariant (LTI) controls, combinatorics, quaternions, sets, tensor products and contraction, Pauli algebra, hydrogen wavefunctions, quantum harmonic oscillators, gamma matrices, Gaussian optics, and continuum mechanics.

## Defining symbolic variables and functions

The first steps to using SymPy are importing its various modules and defining symbolic variables and functions. Symbolic variables can be defined three ways: 1) imported from sympy.abc, 2) created with a call to sympy.symbols(), or 3) created with a call to sympy.Symbol(). The module sympy.abc has a collection of predefined symbolic variables that include all lower- and uppercase letters and words defining Greek letters such as omega. Multi-character variables can be defined with the sympy.symbols() or sympy.Symbol() functions. The latter allows one to add properties to a variable indicating, for example, that it is an integer, or its value is always positive, or that it is a constant, or must be real. The code below defines symbolic variables x, y, r, phi, u1, v1, and Length:

Python:

from sympy import symbols, Symbol
from sympy.abc import x, y, r, phi                                 # method 1
u1, v1 = symbols('u1 v1')                                          # method 2
Length = Symbol('Length', positive=True, constant=True, real=True) # method 3

The constants $\pi$, $i$, $e$, and $\infty$ must be imported to be used symbolically. $i$ and $e$ exist as uppercase I and E, and $\infty$ is oo:

Python:

In : from sympy import Float, pi, I, E, oo
In : pi, I, E, oo
Out: (pi, I, E, oo)
In : Float(pi), Float(E), Float(oo)
Out: (3.14159265358979, 2.71828182845905, oo)

Float(I) doesn’t appear above because it yields a TypeError.

Unlike $\pi$ and $e$, infinity is not expanded to its IEEE 754 floating point representation, even though NumPy has an explicit value for it: np.inf.

#### Symbolic functions

In addition to symbolic variables, functions that operate on symbolic variables must also be explicitly requested from SymPy. Symbolic-capable functions include forward and inverse trigonometric, forward and inverse hyperbolic, logarithmic, and many others.

For example if your work involves symbolic computations with sine, cosine, and natural logarithm functions, load the symbolic versions of these functions from SymPy with:

Python:

from sympy import sin, cos, ln

Examples of equivalent MATLAB imports appear in the integration and differentiation sections below.
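For instance, with these functions imported, a derivative and an antiderivative come out symbolically. This is a minimal sketch; diff and integrate are additional SymPy imports beyond the three shown above:

```python
from sympy import sin, cos, ln, diff, integrate
from sympy.abc import x

assert diff(sin(x), x) == cos(x)    # d/dx sin(x) = cos(x)
assert integrate(1/x, x) == ln(x)   # antiderivative of 1/x (constant omitted)
```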

## Solving equations

We’ll use the symbols defined above in some equations and have SymPy solve them. Here’s an easy one to start: given the equation for a circle,

$$x^2 + y^2 = r^2$$
express the circle in terms of $x$. There are two ways to set up the problem for SymPy. The first involves recasting the equation to equal zero, as in
$$x^2 + y^2 - r^2 = 0$$

then supplying the left hand side to solve(), along with the variable we want to solve for, namely x:

Python:

In : from sympy.abc import x, y, r
In : from sympy import solve
In : solve(x**2 + y**2 - r**2, x)
Out: [-sqrt((r - y)*(r + y)), sqrt((r - y)*(r + y))]

While the answer is correct, it doesn’t resemble the expected form of

$$x = \pm \sqrt{ r^2 - y^2 }$$
We can get the expected form by explicitly expanding the two parts of the solution:

Python:

In : from sympy.abc import x, y, r
In : from sympy import solve, expand
In : sol_minus, sol_plus = solve(x**2 + y**2 - r**2, x)
In : expand(sol_minus), expand(sol_plus)
Out: (-sqrt(r**2 - y**2), sqrt(r**2 - y**2))

… which raises the question: what does it mean for an equation to be “simplified” or “expanded”? SymPy’s opinion in this case is the opposite of my understanding of these terms.
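A quick numeric spot-check of $x = \pm\sqrt{r^2 - y^2}$ with concrete values (pure Python; this check is mine, not from the original post):

```python
import math

# With r = 5 and y = 3 the solution gives x = +/- 4,
# and (4, 3) indeed lies on the circle of radius 5.
r, y = 5.0, 3.0
x = math.sqrt(r**2 - y**2)
assert x == 4.0
assert math.isclose(x**2 + y**2, r**2)
```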

The second way to solve an equation is to create an explicit equation object with the Eq() function then pass the equation to solve(). Eq() takes two input arguments which are the left and right sides of the equation:

Python:

In : from sympy.abc import x, y, r
In : from sympy import Eq
In : circle_eqn = Eq(x**2 + y**2, r**2) # x^2 + y^2 = r^2
In : solve(circle_eqn, x)
Out: [-sqrt((r - y)*(r + y)), sqrt((r - y)*(r + y))]

#### Intersection of a circle and line in Python

So far we haven’t really solved anything; we’ve just reordered terms of an equation. Next we’ll add the equation of a line to the mix and have SymPy figure out the points of intersection. The equations are

\begin{align*} x^2 + y^2 &= r^2 \\ y &= m x + b \end{align*}

This is a nonlinear system of equations so we’ll need SymPy’s nonlinsolve() function:

Python:

In : from sympy import Eq, nonlinsolve
In : from sympy.abc import x, y, r, m, b
In : circle_eqn = Eq(x**2 + y**2, r**2) # x^2 + y^2 = r^2
In : line_eqn = Eq(y, m*x + b)          # y = m x + b
In : nonlinsolve([circle_eqn, line_eqn], [x, y])
Out: {((-b + b/(m**2 + 1) - m*sqrt(-b**2 + m**2*r**2 + r**2)/(m**2 + 1))/m,
b/(m**2 + 1) - m*sqrt(-b**2 + m**2*r**2 + r**2)/(m**2 + 1)),
((-b + b/(m**2 + 1) + m*sqrt(-b**2 + m**2*r**2 + r**2)/(m**2 + 1))/m,
b/(m**2 + 1) + m*sqrt(-b**2 + m**2*r**2 + r**2)/(m**2 + 1))}

SymPy returns two pairs of $(x,y)$ values representing the points of intersection—if these exist, of course. The actual values of $b$, $m$, and $r$ will determine whether the solution has real or imaginary values.
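Substituting concrete values confirms the symbolic result. For the unit circle and the line y = x (that is, m = 1, b = 0, r = 1), the intersections should be (±1/√2, ±1/√2); the check below, which is mine and not part of the post, plugs those values into SymPy's "minus" solution:

```python
import math

# Evaluate SymPy's first (x, y) pair at m = 1, b = 0, r = 1
m, b, r = 1.0, 0.0, 1.0
disc = math.sqrt(-b**2 + m**2*r**2 + r**2)
x_minus = (-b + b/(m**2 + 1) - m*disc/(m**2 + 1))/m
y_minus = b/(m**2 + 1) - m*disc/(m**2 + 1)

assert math.isclose(x_minus, -1/math.sqrt(2))
assert math.isclose(y_minus, -1/math.sqrt(2))
assert math.isclose(x_minus**2 + y_minus**2, r**2)   # point is on the circle
assert math.isclose(y_minus, m*x_minus + b)          # point is on the line
```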

In Solve Systems of Nonlinear Equations with scipy.optimize.root() I show how SciPy can find numeric solutions to systems of nonlinear equations. How does SymPy fare on the intersection of four spheres problem described there? Unfortunately, not well. The program below assembles the problem but nonlinsolve() doesn’t return after running for more than an hour.

Python:

#!/usr/bin/env python
# file: four_spheres.py
# Don't try this at home; the nonlinsolve() step hangs.
from sympy import Eq, nonlinsolve, symbols
from sympy.abc import x, y, z, c, e
A1, A2, A3, A4 = symbols('A1 A2 A3 A4')
B1, B2, B3, B4 = symbols('B1 B2 B3 B4')
C1, C2, C3, C4 = symbols('C1 C2 C3 C4')
Dt1, Dt2, Dt3, Dt4 = symbols('Dt1 Dt2 Dt3 Dt4')
s1 = Eq( (x-A1)**2 + (y-B1)**2 + (z-C1)**2 , (c*(Dt1 + e))**2 )
s2 = Eq( (x-A2)**2 + (y-B2)**2 + (z-C2)**2 , (c*(Dt2 + e))**2 )
s3 = Eq( (x-A3)**2 + (y-B3)**2 + (z-C3)**2 , (c*(Dt3 + e))**2 )
s4 = Eq( (x-A4)**2 + (y-B4)**2 + (z-C4)**2 , (c*(Dt4 + e))**2 )
soln = nonlinsolve([s1, s2, s3, s4], [x, y, z, e]) # hangs

#### Intersection of a circle and line in MATLAB

The MATLAB version of the circle/line intersection problem looks like this:

sympy = py.importlib.import_module('sympy');
abc = py.importlib.import_module('sympy.abc');
x = abc.x;
y = abc.y;
r = abc.r;
m = abc.m;
b = abc.b;
Two = int64(2);
circle_eqn = sympy.Eq(x^Two + y^Two, r^Two); % x^2 + y^2 = r^2
line_eqn = sympy.Eq(y, m*x + b);             % y = m*x + b
soln = sympy.nonlinsolve(py.list({circle_eqn, line_eqn}), ...
                         py.list({x, y}))

Note: In MATLAB, exponents for squares, cubes, and so on should be explicitly cast to integers in the SymPy expression. If this is not done, the notation “^2” given to SymPy will use the default MATLAB type of double which greatly complicates — and often defeats — the SymPy evaluator.

The solution in MATLAB appears with the full complement of attributes in addition to just the equations we’re after. This is a truncated portion of the output:

MATLAB:

>> soln

Python FiniteSet with properties:

args: [1x2 py.tuple]
assumptions0: [1x1 py.dict]
boundary: [1x1 py.sympy.sets.sets.FiniteSet]
:
is_transcendental: [1x1 py.NoneType]
is_zero: [1x1 py.NoneType]
measure: [1x1 py.int]

{((-b + b/(m**2 + 1) - m*sqrt(-b**2 + m**2*r**2 + r**2)/(m**2 + 1))/m,
b/(m**2 + 1) - m*sqrt(-b**2 + m**2*r**2 + r**2)/(m**2 + 1)),
((-b + b/(m**2 + 1) + m*sqrt(-b**2 + m**2*r**2 + r**2)/(m**2 + 1))/m,
b/(m**2 + 1) + m*sqrt(-b**2 + m**2*r**2 + r**2)/(m**2 + 1))}

The string containing just the two $(x,y)$ pairs is in soln.char:

MATLAB:

>> soln.char

'{((-b + b/(m**2 + 1) - m*sqrt(-b**2 + m**2*r**2 + r**2)/(m**2 + 1))/m,
b/(m**2 + 1) - m*sqrt(-b**2 + m**2*r**2 + r**2)/(m**2 + 1)),
((-b + b/(m**2 + 1) + m*sqrt(-b**2 + m**2*r**2 + r**2)/(m**2 + 1))/m,
b/(m**2 + 1) + m*sqrt(-b**2 + m**2*r**2 + r**2)/(m**2 + 1))}'

## Solution as LaTeX

The SymPy text results we’ve seen so far look awkward. If you want to publish the results, they’ll look much more elegant rendered with LaTeX. Simple expressions are easy to typeset by hand, but SymPy’s output can get lengthy, so its LaTeX generator comes in handy:

Python:

In : from sympy.printing.latex import print_latex
In : from sympy import Eq, nonlinsolve
In : from sympy.abc import x, y, r, m, b
In : circle_eqn = Eq(x**2 + y**2, r**2) # x^2 + y^2 = r^2
In : line_eqn = Eq(y, m*x + b)          # y = m x + b
In : soln = nonlinsolve([circle_eqn, line_eqn], [x, y])
In : print_latex(soln)

The last line produces this output:

\left\{\left( \frac{- b + \frac{b}{m^{2} + 1} - \frac{m \sqrt{- b^{2} + m^{2} r^{2} + r^{2}}}{m^{2} + 1}}{m}, \ \frac{b}{m^{2} + 1} - \frac{m \sqrt{- b^{2} + m^{2} r^{2} + r^{2}}}{m^{2} + 1}\right), \left( \frac{- b + \frac{b}{m^{2} + 1} + \frac{m \sqrt{- b^{2} + m^{2} r^{2} + r^{2}}} {m^{2} + 1 }}{m}, \ \frac{b}{m^{2} + 1} + \frac{m \sqrt{- b^{2} + m^{2} r^{2} + r^{2}}}{m^{2} + 1}\right)\right\}

which renders as

$$\left\{\left( \frac{- b + \frac{b}{m^{2} + 1} - \frac{m \sqrt{- b^{2} + m^{2} r^{2} + r^{2}}}{m^{2} + 1}}{m}, \ \frac{b}{m^{2} + 1} - \frac{m \sqrt{- b^{2} + m^{2} r^{2} + r^{2}}}{m^{2} + 1}\right), \\ \left( \frac{- b + \frac{b}{m^{2} + 1} + \frac{m \sqrt{- b^{2} + m^{2} r^{2} + r^{2}}}{m^{2} + 1}}{m}, \ \frac{b}{m^{2} + 1} + \frac{m \sqrt{- b^{2} + m^{2} r^{2} + r^{2}}}{m^{2} + 1}\right)\right\}$$

## Solution as MATLAB / Python / C / Fortran / etc source code

In addition to LaTeX, SymPy can express solutions as computer source code in a dozen languages. The sympy.printing module has these emitters:

| Language | sympy.printing function |
| --- | --- |
| C | ccode() |
| C++ | cxxcode() |
| Fortran | fcode() |
| GLSL | glsl_code() |
| JavaScript | jscode() |
| Julia | julia_code() |
| Maple | maple_code() |
| Mathematica | mathematica_code() |
| MATLAB | octave_code() |
| Python | pycode() |
| R | rcode() |
| Rust | rust_code() |

The code blocks below repeat the circle/line intersection, this time emitting MATLAB source code instead of LaTeX. In MATLAB, the call to octave_code() returns a Python string, so it is converted to a MATLAB string before printing.

Python:

```python
#!/usr/bin/env python
from sympy import Eq, nonlinsolve
from sympy.abc import x, y, r, m, b
from sympy.printing import octave_code

circle_eqn = Eq(x**2 + y**2, r**2)  # x^2 + y^2 = r^2
line_eqn = Eq(y, m*x + b)           # y = m x + b
soln = nonlinsolve([circle_eqn, line_eqn], [x, y])
print(soln)
print(octave_code(soln))
```

MATLAB:

```matlab
sympy = py.importlib.import_module('sympy');
abc = py.importlib.import_module('sympy.abc');
sym_code = py.importlib.import_module('sympy.printing');
x = abc.x;
y = abc.y;
r = abc.r;
m = abc.m;
b = abc.b;
Two = int64(2);
circle_eqn = sympy.Eq(x^Two + y^Two, r^Two); % x^2 + y^2 = r^2
line_eqn = sympy.Eq(y, m*x + b);             % y = m*x + b
soln = sympy.nonlinsolve(py.list({circle_eqn, line_eqn}), ...
                         py.list({x, y}));
fprintf('%s\n', string(sym_code.octave_code(soln)))
```

Both versions produce this output:

    "{{(-b + b./(m.^2 + 1) - m.*sqrt(-b.^2 + m.^2.*r.^2 + r.^2)./(m.^2 + 1))./m,
b./(m.^2 + 1) - m.*sqrt(-b.^2 + m.^2.*r.^2 + r.^2)./(m.^2 + 1)},
{(-b + b./(m.^2 + 1) + m.*sqrt(-b.^2 + m.^2.*r.^2 + r.^2)./(m.^2 + 1))./m,
b./(m.^2 + 1) + m.*sqrt(-b.^2 + m.^2.*r.^2 + r.^2)./(m.^2 + 1)}}"

Replacing octave_code() with jscode() produces much lengthier JavaScript code:

{((-b + b/(Math.pow(m, 2) + 1) - m*Math.sqrt(-Math.pow(b, 2) +
Math.pow(m, 2)*Math.pow(r, 2) + Math.pow(r, 2))/(Math.pow(m, 2) + 1))/m,
b/(Math.pow(m, 2) + 1) - m*Math.sqrt(-Math.pow(b, 2) +
Math.pow(m, 2)*Math.pow(r, 2) + Math.pow(r, 2))/(Math.pow(m, 2) + 1)),
((-b + b/(Math.pow(m, 2) + 1) + m*Math.sqrt(-Math.pow(b, 2) +
Math.pow(m, 2)*Math.pow(r, 2) + Math.pow(r, 2))/(Math.pow(m, 2) + 1))/m,
b/(Math.pow(m, 2) + 1) + m*Math.sqrt(-Math.pow(b, 2) +
Math.pow(m, 2)*Math.pow(r, 2) + Math.pow(r, 2))/(Math.pow(m, 2) + 1))}
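As one more example from the table above (a sketch of my own, not in the original), the C emitter ccode() can be applied to each coordinate of the solution pairs:

```python
# Emit C source for each coordinate of the circle/line intersection solutions.
from sympy import Eq, nonlinsolve
from sympy.abc import x, y, r, m, b
from sympy.printing import ccode

circle_eqn = Eq(x**2 + y**2, r**2)
line_eqn = Eq(y, m*x + b)
soln = nonlinsolve([circle_eqn, line_eqn], [x, y])

for xi, yi in soln:        # soln is a FiniteSet of (x, y) tuples
    print('x =', ccode(xi))  # powers become pow(), square roots become sqrt()
    print('y =', ccode(yi))
```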


## Definite and indefinite integrals

Integration is done with SymPy’s integrate() function. Here’s a tricky one:

$$\int \cos(\pi n x) \left( 1 + \sin^2 \left(\frac{3 \pi n x}{2} \right) \right) dx$$

In addition to the integrate() function we’ll also need to import symbolic versions of the sine and cosine functions. Constraining $n$ and $x$ to be real, and $n$ to be a positive integer, yields a cleaner result.

The Python and MATLAB solutions are

Python:

```python
#!/usr/bin/env python
from sympy import pi, sin, cos, integrate, Symbol
from sympy.printing.latex import print_latex

n = Symbol('n', real=True, positive=True, integer=True)
x = Symbol('x', real=True)
soln = integrate(cos(pi*n*x)*(1 + sin(3*pi*n*x/2)**2), x)
print(soln)
print_latex(soln)
at_x_n = soln.evalf(subs={x: 1.5, n: 1})
print(f'solution at x=1.5, n=1 is {at_x_n}')
```

MATLAB:

```matlab
sympy = py.importlib.import_module('sympy');
printing = py.importlib.import_module('sympy.printing');
n = sympy.Symbol('n', pyargs('real',py.True, ...
    'positive',py.True, 'integer',py.True));
x = sympy.Symbol('x', pyargs('real',py.True));
Pi = sympy.pi;   % uppercase to be distinct from MATLAB's pi
Sin = sympy.sin; % uppercase to be distinct from MATLAB's sin()
Cos = sympy.cos; % uppercase to be distinct from MATLAB's cos()
soln = sympy.integrate(Cos(Pi*n*x)*(1 + Sin(3*Pi*n*x/2)^int64(2)), ...
                       py.tuple({x}));
fprintf('%s\n', soln.char)
printing.print_latex(soln)
```

Note: In the MATLAB solution, SymPy versions of $\pi$ and sine and cosine functions are imported with a leading uppercase character to make them distinct from MATLAB’s pi, sin(), and cos(). If you import pi, sin, and cos from SymPy using the identical names they will be SymPy objects that cannot be used in conventional MATLAB expressions.

The text version of the solution, minus the obligatory constant, from Python is

7*sin(pi*n*x)*sin(3*pi*n*x/2)**2/(16*pi*n) +
9*sin(pi*n*x)*cos(3*pi*n*x/2)**2/(16*pi*n) + sin(pi*n*x)/(pi*n) -
3*sin(3*pi*n*x/2)*cos(pi*n*x)*cos(3*pi*n*x/2)/(8*pi*n)

which renders like this in LaTeX:

$$\frac{7 \sin{\left(\pi n x \right)} \sin^{2}{\left(\frac{3 \pi n x}{2} \right)}}{16 \pi n} + \frac{9 \sin{\left(\pi n x \right)} \cos^{2}{\left(\frac{3 \pi n x}{2} \right)}}{16 \pi n} + \frac{\sin{\left(\pi n x \right)}}{\pi n} - \frac{3 \sin{\left(\frac{3 \pi n x}{2} \right)} \cos{\left(\pi n x \right)} \cos{\left(\frac{3 \pi n x}{2} \right)}}{8 \pi n}$$

The MATLAB result looks quite different:

0.4375*sin(pi*n*x)*sin(3*pi*n*x/2)**2/(pi*n) +
0.5625*sin(pi*n*x)*cos(3*pi*n*x/2)**2/(pi*n) + 1.0*sin(pi*n*x)/(pi*n) -
0.375*sin(3*pi*n*x/2)*cos(pi*n*x)*cos(3*pi*n*x/2)/(pi*n)
which renders as

$$\frac{0.4375 \sin{\left(\pi n x \right)} \sin^{2}{\left(\frac{3 \pi n x}{2} \right)}}{\pi n} + \frac{0.5625 \sin{\left(\pi n x \right)} \cos^{2}{\left(\frac{3 \pi n x}{2} \right)}}{\pi n} + \frac{1.0 \sin{\left(\pi n x \right)}}{\pi n} - \\ \frac{0.375 \sin{\left(\frac{3 \pi n x}{2} \right)} \cos{\left(\pi n x \right)} \cos{\left(\frac{3 \pi n x}{2} \right)}}{\pi n}$$

but, as shown with numeric evaluation, both yield the same results.

#### Numeric evaluation of symbolic results

One way to convince ourselves that two starkly different expressions are equivalent is to substitute numeric values for the variables. SymPy expressions have a .evalf() method to facilitate this; it accepts a dictionary of substitutions and returns a numeric value. The last two lines of the Python code above,

Python:

at_x_n = soln.evalf(subs={x : 1.5, n : 1})
print(f'solution at x=1.5, n=1 is {at_x_n}')

print

solution at x=1.5, n=1 is -0.477464829275686

I’ve not been able to replicate .evalf()’s subs= argument in MATLAB, however. Instead, to compare the results of the two integrals, I generated MATLAB source code for each, substituted arrays of values, and confirmed that the Python and MATLAB solutions agree.
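Lacking subs= on the MATLAB side, one Python-only spot check (my addition) is to paste the two printed antiderivatives back into SymPy with sympify() and evaluate both at a few points:

```python
# Spot-check that the two differently-arranged antiderivatives agree
# numerically; the strings are the Python and MATLAB outputs quoted above.
from sympy import sympify
from sympy.abc import n, x

py_form = sympify(
    '7*sin(pi*n*x)*sin(3*pi*n*x/2)**2/(16*pi*n)'
    ' + 9*sin(pi*n*x)*cos(3*pi*n*x/2)**2/(16*pi*n)'
    ' + sin(pi*n*x)/(pi*n)'
    ' - 3*sin(3*pi*n*x/2)*cos(pi*n*x)*cos(3*pi*n*x/2)/(8*pi*n)')
ml_form = sympify(
    '0.4375*sin(pi*n*x)*sin(3*pi*n*x/2)**2/(pi*n)'
    ' + 0.5625*sin(pi*n*x)*cos(3*pi*n*x/2)**2/(pi*n)'
    ' + 1.0*sin(pi*n*x)/(pi*n)'
    ' - 0.375*sin(3*pi*n*x/2)*cos(pi*n*x)*cos(3*pi*n*x/2)/(pi*n)')

for xv, nv in [(0.25, 1), (1.5, 1), (0.7, 3)]:
    a = py_form.evalf(subs={x: xv, n: nv})
    b = ml_form.evalf(subs={x: xv, n: nv})
    assert float(abs(a - b)) < 1e-12
```

At x=1.5, n=1 both forms reproduce the -0.477464829275686 value shown above.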

#### Definite integrals

Definite integrals are specified by supplying lower and upper bounds after the variable of integration. The value of the same integral shown above going from $x = 0$ to $x = \frac{1}{\pi}$ for $n = 1$, in other words

$$\int_0^{\frac{1}{\pi}} \cos(\pi x) \left( 1 + \sin^2 \left(\frac{3 \pi x}{2} \right) \right) dx$$

is solved like this in Python and MATLAB:

Python:

```python
#!/usr/bin/env python
from sympy import pi, sin, cos, integrate, Symbol, simplify
from sympy.printing.latex import print_latex

x = Symbol('x', real=True)
soln = integrate(cos(pi*x)*(1 + sin(3*pi*x/2)**2), (x, 0, 1/pi))
print_latex(simplify(soln))
```

MATLAB:

```matlab
sympy = py.importlib.import_module('sympy');
printing = py.importlib.import_module('sympy.printing');
x = sympy.Symbol('x', pyargs('real',py.True));
Pi = sympy.pi;   % uppercase to be distinct from MATLAB's pi
Sin = sympy.sin; % uppercase to be distinct from MATLAB's sin()
Cos = sympy.cos; % uppercase to be distinct from MATLAB's cos()
soln = sympy.integrate(Cos(Pi*x)*(1 + Sin(3*Pi*x/2)^int64(2)), ...
                       py.tuple({x, 0, 1/Pi}));
printing.print_latex(soln)
fprintf('%s\n', soln.evalf().char)
```

Once again, the Python and MATLAB symbolic solutions differ but both evaluate to the same number:

Python:

$$\frac{- 2 \sin{\left(2 \right)} - \sin{\left(4 \right)} + 24 \sin{\left(1 \right)}}{16 \pi} = 0.380649112305801$$

MATLAB:

$$\frac{1.19584445481538}{\pi} = 0.380649112305801$$

In this case .evalf() works in MATLAB because there are no variables to substitute.
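As a further check (my addition, not part of the original comparison), Python can collapse the same definite integral straight to a float:

```python
# Evaluate the definite integral numerically as a cross-check of both
# symbolic forms shown above.
from sympy import pi, sin, cos, integrate, Symbol

x = Symbol('x', real=True)
val = integrate(cos(pi*x)*(1 + sin(3*pi*x/2)**2), (x, 0, 1/pi))
print(float(val))  # 0.380649112305801...
```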

## Derivatives

Derivatives are computed with the diff() function. If we start with a function

$$y = \sin^2(\pi x) e^{- i x}$$

we can get $\frac{dy}{dx}$ with

Python:

```python
#!/usr/bin/env python
from sympy import I, pi, sin, exp, diff
from sympy.abc import x
from sympy.printing.latex import print_latex

soln = diff(sin(pi*x)**2 * exp(-I*x), x)
print_latex(soln)
```

MATLAB:

```matlab
sympy = py.importlib.import_module('sympy');
abc = py.importlib.import_module('sympy.abc');
printing = py.importlib.import_module('sympy.printing');
x = abc.x;
I = sympy.I;
Pi = sympy.pi;   % uppercase to be distinct from MATLAB's pi
Sin = sympy.sin; % uppercase to be distinct from MATLAB's sin()
Exp = sympy.exp; % uppercase to be distinct from MATLAB's exp()
Two = int64(2);
soln = sympy.diff(Sin(Pi*x)^Two * Exp(-I*x), x);
fprintf('%s\n', soln.char)
printing.print_latex(soln)
```

both of which give this LaTeX representation of the solution:

$$\frac{dy}{dx} = - i e^{- i x} \sin^{2}{\left(\pi x \right)} + 2 \pi e^{- i x} \sin{\left(\pi x \right)} \cos{\left(\pi x \right)}$$

#### Partial derivatives

diff() can also return partial derivatives. Say you have a complicated surface such as

$$z(x,y) = 4 x^4 - 17 x^3 y + 3 x^2 y^2 - y^3 - y^4$$

and you want to compute the matrix of second order derivatives

$$\begin{bmatrix} \frac{\partial^2 z}{\partial x^2} & \frac{\partial^2 z}{\partial x \partial y} \\ \frac{\partial^2 z}{\partial y \partial x} & \frac{\partial^2 z}{\partial y^2} \end{bmatrix}$$

you could do it with

Python:

```python
#!/usr/bin/env python
from sympy import diff
from sympy.abc import x, y
from sympy.printing.latex import print_latex

z = 4*x**4 - 17*x**3*y + 3*x**2*y**2 - y**3 - y**4
dzz_dxx = diff(z, x, x)
dzz_dxy = diff(z, x, y)
dzz_dyx = diff(z, y, x)
dzz_dyy = diff(z, y, y)
print_latex(dzz_dxx)
print_latex(dzz_dxy)
print_latex(dzz_dyx)
print_latex(dzz_dyy)
```

MATLAB:

```matlab
sympy = py.importlib.import_module('sympy');
abc = py.importlib.import_module('sympy.abc');
printing = py.importlib.import_module('sympy.printing');
x = abc.x;
y = abc.y;
E2 = int64(2);
E3 = int64(3);
E4 = int64(4);
z = 4*x^E4 - 17*x^E3*y + 3*x^E2*y^E2 - y^E3 - y^E4;
dzz_dxx = diff(z, x, x);
dzz_dxy = diff(z, x, y);
dzz_dyx = diff(z, y, x);
dzz_dyy = diff(z, y, y);
printing.print_latex(dzz_dxx)
printing.print_latex(dzz_dxy)
printing.print_latex(dzz_dyx)
printing.print_latex(dzz_dyy)
```

which return different arrangements of the same result:

Python:

$$\begin{bmatrix} 6 \cdot \left(8 x^{2} - 17 x y + y^{2}\right) & 3 x \left(- 17 x + 4 y\right) \\ 3 x \left(- 17 x + 4 y\right) & 6 \left(x^{2} - 2 y^{2} - y\right) \end{bmatrix}$$

MATLAB:

$$\begin{bmatrix} 48.0 x^{2} - 102.0 x y + 6.0 y^{2} & x \left(- 51.0 x + 12.0 y\right) \\ x \left(- 51.0 x + 12.0 y\right) & 6.0 x^{2} - 12 y^{2} - 6 y \end{bmatrix}$$
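As an aside (my addition, not in the original), SymPy can also assemble the entire matrix of second derivatives in one call with hessian():

```python
# Build the 2x2 matrix of second partial derivatives in one call.
from sympy import hessian
from sympy.abc import x, y

z = 4*x**4 - 17*x**3*y + 3*x**2*y**2 - y**3 - y**4
H = hessian(z, (x, y))
print(H)  # the mixed partials H[0,1] and H[1,0] come out identical
```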

## Summing series

SymPy has support for Fourier series, power series, series summation, and series expansion. This example shows how the closed-form summation

$$\displaystyle\sum_{k=1}^{N+1} \frac{e^{ik\pi}}{N^2}$$

can be found:

Python:

```python
#!/usr/bin/env python
from sympy import I, Sum, exp, pi
from sympy.abc import k, N
from sympy.printing.latex import print_latex

s = Sum(exp(I*k*pi)/N**2, (k, 1, N+1))
soln = s.doit()
print_latex(soln)
```

MATLAB:

```matlab
sympy = py.importlib.import_module('sympy');
abc = py.importlib.import_module('sympy.abc');
printing = py.importlib.import_module('sympy.printing');
k = abc.k;
N = abc.N;
I = sympy.I;
Pi = sympy.pi;   % uppercase to be distinct from MATLAB's pi
Exp = sympy.exp; % uppercase to be distinct from MATLAB's exp()
Sum = sympy.Sum;
Two = int64(2);
s = Sum(Exp(I*k*Pi)/(N^Two), py.tuple({k, 1, N+1}));
soln = s.doit();
printing.print_latex(soln)
```

$$\frac{- \frac{\left(-1\right)^{N + 2}}{2} - \frac{1}{2}}{N^{2}}$$
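A quick way to gain confidence in the closed form (my own spot check) is to compare it against brute-force summation for small $N$, using the fact that $e^{ik\pi} = (-1)^k$ for integer $k$:

```python
# Compare the closed-form sum against explicit summation for small N.
from sympy import I, Sum, exp, pi
from sympy.abc import k, N

closed = Sum(exp(I*k*pi)/N**2, (k, 1, N + 1)).doit()
for Nv in range(1, 6):
    explicit = sum((-1)**kk for kk in range(1, Nv + 2)) / Nv**2
    assert abs(complex(closed.subs(N, Nv)) - explicit) < 1e-12
```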

## Laplace transforms

The laplace_transform() function in SymPy resembles the laplace() function in MATLAB’s Symbolic Toolbox. Here we’ll use it to compute the transform of

$$f(t) = e^{-a t}\cos{\omega t}$$

Python:

```python
#!/usr/bin/env python
from sympy import exp, cos, laplace_transform, symbols
from sympy.printing.latex import print_latex

s = symbols('s')
t = symbols('t')
a = symbols('a', real=True, positive=True)
omega = symbols('omega', real=True)
f = exp(-a*t)*cos(omega*t)
L = laplace_transform(f, t, s)
print_latex(L[0])
```

MATLAB:

```matlab
sympy = py.importlib.import_module('sympy');
printing = py.importlib.import_module('sympy.printing.latex');
s = sympy.symbols('s');
t = sympy.symbols('t');
a = sympy.symbols('a', pyargs('real',py.True, 'positive',py.True));
omega = sympy.symbols('omega', pyargs('real',py.True));
f = sympy.exp(-a*t)*sympy.cos(omega*t);
L = sympy.laplace_transform(f, t, s); % a tuple; solution in 1st position
printing.print_latex(L{1})
```

which produces LaTeX that renders as

$$\frac{a + s}{\omega^{2} + \left(a + s\right)^{2}}$$
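A detail worth knowing (my aside): laplace_transform() returns a 3-tuple, which is why the Python code indexes L[0] and the MATLAB code L{1}. The remaining entries report the convergence half-plane and any auxiliary conditions:

```python
# laplace_transform() returns (transform, convergence abscissa, conditions).
from sympy import cos, exp, laplace_transform, symbols

s, t = symbols('s t')
a = symbols('a', real=True, positive=True)
omega = symbols('omega', real=True)

F, plane, cond = laplace_transform(exp(-a*t)*cos(omega*t), t, s)
print(F)      # the transform shown above
print(plane)  # -a, i.e. the transform is valid for Re(s) > -a
```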