Rust vs Julia in scientific computing

Tags: #rust,#julia

Reading time: ~27min


Although the scientific domain often demands the highest performance from programming languages, people in this domain have typically preferred less efficient dynamic languages like Python for their flexibility and ease of use. At some point, a project outgrows that efficiency compromise and the software has to be rewritten in a statically-typed language like C/C++. People in the scientific domain thus have to switch between two languages and often rewrite the logic they already encoded in the first one. This is what is referred to as the two-language problem. As an attempt to solve this problem, Julia was developed as a dynamic language that feels like Python but has performance comparable to C/C++.

Rust is a statically-typed language that indirectly addresses the problem by improving the experience with statically typed languages instead of accelerating the runtime of dynamic ones. Its strong type system and friendly compiler empower developers to write reliable and efficient software without the memory safety problems known from C/C++.

After using and even teaching both languages, I will compare them and discuss whether Julia solves the two-language problem and when you should use Rust instead.

Note

This blog post is the basis for my talk at Scientific Computing in Rust 2023.

If you prefer watching a short video as a trailer, you can watch the recorded talk. But the blog post has many more details and aspects that can't fit into 7 minutes.

⚠️ Warning ⚠️

I have to warn you, I am in love with Rust!

Therefore, this post is biased towards Rust. But I actually use Julia regularly and I even spread it at my university by teaching a vacation course about it. I just think that the promises of Julia can be misleading and there are use cases where you should use Rust instead. Continue reading to find out why.

The code examples were tested with Rust 1.71.0 and Julia 1.9.2.


Fearless concurrency*

Multithreading, as one form of concurrency, is required to speed up programs after hitting the performance limit of sequential code. Julia recognizes the importance of multithreading and makes utilizing it very easy. In fact, multithreading in Julia is often just a matter of adding the @threads macro in front of a for loop to split its iterations across multiple threads and utilize the full capabilities of multi-core processors. But although Julia makes multithreading easier, it doesn't make it any safer!

Let's take a look at the following known example of incrementing a counter by multiple threads:

function unsafe_count()
    counter = 0

    Threads.@threads for _ in 1:10_000
        counter += 1
    end

    counter
end

If you are not familiar with multithreading, you would expect the result to be 10_000. But if you run it multiple times, you will get outputs similar to the following:

8609
8491
9191

The output is random because of data races.

Definition

A simplified definition of a data race is the situation when one thread modifies the value of a variable while another thread is trying to read its value or also modify it. Data races are known to lead to arbitrary incorrect results that are very hard to discover and debug, not only in Julia but also in languages like Python and C/C++.

The data race in the example happens because a thread reads the current value of counter, adds 1 to it and stores the result back in the same variable. If two threads read the variable at the same time and then each add 1, they compute the same result and both store it, which means that we lose one addition.

The following demonstrates the scenario without a data race:

counter | thread 1  | thread 2
3       | read 3    |
3       | 3 + 1 = 4 |
4       | write 4   |
4       |           | read 4
4       |           | 4 + 1 = 5
5       |           | write 5

In the case of a data race, an addition is lost:

counter | thread 1  | thread 2
3       | read 3    |
3       |           | read 3
3       | 3 + 1 = 4 | 3 + 1 = 4
4       | write 4   |
4       |           | write 4

Let's translate the Julia code into Rust:

use rayon::prelude::*;

let mut counter = 0;

(0..10_000).into_par_iter().for_each(|_| {
    counter += 1;
});

We use the Rayon library which offers easy multithreading using iterators.

Fortunately, the Rust code above doesn't compile! ❌

In Rust, for any given value, you either have exactly one mutable reference and no immutable ones, or (XOR) you have any number of immutable references but no mutable ones.
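
To make that rule concrete, here is a minimal sketch (illustrative only) of how mixing the two kinds of references fails to compile:

let mut x = 0;

let r1 = &x; // any number of immutable references is fine
let r2 = &x;

let m = &mut x; // ❌ does not compile: `x` is already borrowed as immutable

println!("{r1} {r2}");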

For a data race to happen, you need to have more than one mutable reference. For example, in the Julia code above, every thread has its own mutable reference to the counter variable! It is proven that Rust's type system and borrow checker make data races impossible!

Note

*: With Rust, you almost have fearless concurrency. Data races are impossible, but you still have to fear deadlocks!

This blog post is about safe Rust, which means Rust without using unsafe.

To make the Rust version compile, we need to use either a mutex or an atomic. Atomics guarantee on the hardware level that their supported operations are done atomically, which means in one step! Since an atomic typically performs better than a mutex for a simple counter, we will use AtomicU64 (an unsigned 64-bit integer):

use std::sync::atomic::{AtomicU64, Ordering};

let counter = AtomicU64::new(0);

(0..10_000).into_par_iter().for_each(|_| {
    counter.fetch_add(1, Ordering::Relaxed);
});

Note that counter isn't mutable anymore! There is no more mut after let. Since operations on atomic types guarantee to not introduce data races, they take an immutable reference &self instead of a mutable one &mut self. This allows us to use them on multiple threads (because it is allowed to have multiple immutable references).

Of course, the atomic Rust version above returns 10_000 🎉

If it compiles, it is data race free 😃

Note

Discussing Ordering::Relaxed would go beyond the scope of this blog post. You can read more about atomic memory orderings in the documentation.
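
If the shared state were more complex than a single integer, an atomic wouldn't fit, but a Mutex-based version compiles just as well. A minimal sketch, assuming the same Rayon setup as above:

use std::sync::Mutex;

let counter = Mutex::new(0_u64);

(0..10_000).into_par_iter().for_each(|_| {
    // Only one thread at a time can hold the lock and increment.
    *counter.lock().unwrap() += 1;
});

assert_eq!(*counter.lock().unwrap(), 10_000);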

The correct Julia version looks very similar to the Rust version:

function safe_count()
    counter = Threads.Atomic{UInt64}(0)

    Threads.@threads for _ in 1:10_000
        Threads.atomic_add!(counter, UInt64(1))
    end

    counter[]
end

This means that Julia does have atomics too. But it is not able to detect a possible data race, so it can neither recommend using them nor even warn us.

Julia's multithreading documentation states: "You are entirely responsible for ensuring that your program is data-race free […]"

Moore's law is almost dead, at least for single core performance. Therefore, we need a language that makes concurrency not only easy but also correct.

Project scalability

How hard is it to maintain, extend and reason about the correctness of Julia code while a project grows?

Static analysis

Highly optimized Julia code can get close to the performance of Rust because of Julia's just-in-time (JIT) compilation.

But producing optimized machine code isn't the only purpose of compilers. Julia misses a very important advantage of a statically typed language compiler: Static analysis!

Take a look at the following example in Julia:

v = [1.0]

println("OK")

println(v.pop())

println("No problem!")

Did you find any problem? Well, Julia doesn't see a problem in it until it reaches the problematic line!

If you run the code above, you will see OK printed out before you get an error because we used Rust's syntax for pop. We should have used pop!(v) in Julia.

You might think that this is fine, a simple test run will find this bug.

But what if the buggy code is behind some condition that is dependent on the program input or which is just random like in a Monte Carlo simulation? Here is a demonstration:

v = [1.0]

println("OK")

if rand(Bool)
    println(v.pop())
end

println("No problem!")

If you run this Julia code, you should have about 50% chance of just passing by the buggy block and printing out No problem!.

Well, this is a problem, a big one! Such a type bug can be prevented by a simple type system with static analysis.
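
For contrast, here is a rough Rust counterpart (a sketch using the rand crate for the random branch): even when the call is hidden behind a random condition, a misspelled method never gets past the compiler:

let mut v = vec![1.0];

println!("OK");

if rand::random::<bool>() {
    println!("{:?}", v.pop_back()); // ❌ does not compile: no method named `pop_back` on `Vec<f64>`
}

println!("No problem!");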

Why am I talking about static analysis under the topic of scalability?

Let's say we are writing some molecular dynamics simulation. Take a look at the following example:

particles = [[1.0, 2.0], [2.0, 3.0], [42.0, 35.9]]

for particle in particles
    distance_to_origin = sqrt(particle[1]^2 + particle[2]^2)
    println("Particle's distance to origin: $distance_to_origin")
end

center_of_mass = sum(particle for particle in particles) / length(particles)
println("Center of mass: $center_of_mass")

We create some particles by storing their positions in a vector. As two placeholders for some computations, we calculate their distance to the origin and their center of mass (assuming that they all have mass 1).

Let's say that, later on, we want to take the charge of particles into account for a better accuracy of our simulation. Therefore, we create a struct called Particle storing the position and charge:

struct Particle
    position::Vector{Float64}
    charge::Float64
end

particles = [Particle([1.0, 2.0], 1.0), Particle([2.0, 3.0], -1.0), Particle([42.0, 35.9], 0.0)]

for particle in particles
    distance_to_origin = sqrt(particle[1]^2 + particle[2]^2)
    println("Particle's distance to origin: $distance_to_origin")
end

center_of_mass = sum(particle for particle in particles) / length(particles)
println("Center of mass: $center_of_mass")

We changed the content of the particles vector from positions to instances of Particle.

We don't use the introduced charge yet. We just want to check that we didn't break anything.

We run our code and get an error because we are now trying to index into the Particle struct instead of a position vector while calculating the distance to the origin.

No problem, you might think. We just forgot to adjust that line. We can just fix it and run the code again!

for particle in particles
    distance_to_origin = sqrt(particle.position[1]^2 + particle.position[2]^2)
    println("Particle's distance to origin: $distance_to_origin")
end

Are we done now? If we run it, we get another error. We missed one more line where the center of mass is calculated!

We can fix it easily like the following:

center_of_mass = sum(particle.position for particle in particles) / length(particles)
println("Center of mass: $center_of_mass")

But how long will you stay in the cycle of fixing your code and rerunning it to see if changes work as intended? Will you be sure that you didn't miss any problematic line after finally running your code without errors the first time?

Such changes that affect a relatively big part of a codebase are called refactorings.

Refactoring in Rust is a smooth compiler driven process. The compiler will throw an error for everything that you didn't adjust yet. You just work through the list of compiler errors. After solving that puzzle, your program compiles and you can be pretty confident that you haven't forgotten anything.

No errors at runtime!

Of course, you need tests to check that you don't have logical bugs. But with Rust, you can focus on your program's logic and the compiler will guarantee that everything else is correct.
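
To make this concrete, here is a rough sketch of the particle example in Rust after the refactoring. If either of the two use sites still indexed the particle directly, the compiler would point at the exact line:

struct Particle {
    position: [f64; 2],
    // Not used yet, just like in the Julia version.
    charge: f64,
}

fn main() {
    let particles = vec![
        Particle { position: [1.0, 2.0], charge: 1.0 },
        Particle { position: [2.0, 3.0], charge: -1.0 },
        Particle { position: [42.0, 35.9], charge: 0.0 },
    ];

    for particle in &particles {
        // Writing `particle[0]` here would be a compile-time error after the refactoring.
        let distance_to_origin =
            (particle.position[0].powi(2) + particle.position[1].powi(2)).sqrt();
        println!("Particle's distance to origin: {distance_to_origin}");
    }

    let sum = particles.iter().fold([0.0, 0.0], |acc, p| {
        [acc[0] + p.position[0], acc[1] + p.position[1]]
    });
    let center_of_mass = [sum[0] / particles.len() as f64, sum[1] / particles.len() as f64];
    println!("Center of mass: {center_of_mass:?}");
}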

One could build a linter for Julia, just like the many linters for Python. But linters for dynamically typed languages won't reach the power and correctness of static analysis built on a well-typed language. It would just be like pouring more cement onto a fragile base to make it a bit safer.

Error handling

In the last section, we discussed systematic bugs that can be detected by a static analysis.

What about errors that can't be directly detected at compile time?

Julia offers exceptions for dealing with such cases. How about Rust?

Option: To be or not to be, that is the question

What happens when you run the following code in Julia?

v = [1.0]
pop!(v) * pop!(v)

Well, there is only one value, therefore the second pop! will fail. But how?

It will fail with an error at runtime 💥

Can Rust prevent that? Let's take a look at the signature of pop for Vec<T> in Rust (Vec is Rust's vector type, T is a generic type parameter):

pub fn pop(&mut self) -> Option<T>

It takes a mutable reference of the vector holding values of type T and returns an Option<T>.

Option is just an enum, a very simple but powerful one!

The definition of Option in the standard library is the following:

enum Option<T> {
    None,
    Some(T)
}

This means that an Option<T> can either be None or Some with some value of type T.

Let's see how the above Julia code would look in Rust:

let mut v = vec![1.0];

v.pop() * v.pop()

If you try to compile it, you will get a lovely error message like the following, nicely colored in your terminal (Rust has the best error messages 😍):

error[E0369]: cannot multiply `Option<{float}>` by `Option<{float}>`
  --> src/lib.rs:18:13
   |
18 |     v.pop() * v.pop()
   |     ------- ^ ------- Option<{float}>
   |     |
   |     Option<{float}>

The easiest way to handle an Option is to unwrap it:

let mut v = vec![1.0];

v.pop().unwrap() * v.pop().unwrap()

The behavior of unwrapping a None is just like Julia's: it will panic at runtime.

You might think that we at least made a possible panic explicit, right?

But you shouldn't use unwrap in production code. In Rust, you should do proper pattern matching:

let mut v = vec![1.0];

let v1 = match v.pop() {
    Some(value) => value,
    None => 1.0,
};

let v2 = match v.pop() {
    Some(value) => value,
    None => 1.0,
};

v1 * v2

We use pattern matching to handle the Option. In case the Option is None, we use 1 as the neutral element of multiplication.

You might think that this is a lot of boilerplate code! You are right! But it was only a demonstration of pattern matching to understand how handling an Option works.

The code above can be reduced to the following:

let mut v = vec![1.0];

v.pop().unwrap_or(1.0) * v.pop().unwrap_or(1.0)

The implementation of unwrap_or for Option looks like the following:

fn unwrap_or(self, default: T) -> T {
    match self {
        Some(x) => x,
        None => default,
    }
}

It is just what we have done in the long version, but unwrap_or is a convenient method.

You might think that the result 1 is not what you would expect if the vector is empty. You can handle it differently. But, you see, you are just thinking about how to correctly handle cases where something doesn't work as expected! My mission is accomplished 😉
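
For example, instead of silently falling back to 1.0, you could make the missing value part of the return type. A small sketch with a made-up helper function:

fn product_of_last_two(v: &mut Vec<f64>) -> Option<f64> {
    // Only return a product if both values actually exist.
    match (v.pop(), v.pop()) {
        (Some(a), Some(b)) => Some(a * b),
        _ => None,
    }
}

let mut v = vec![1.0];
assert_eq!(product_of_last_two(&mut v), None);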

Failure is not an Option, it's a Result!

Let's say that you want to write the results of a long simulation in Julia:

open("results/energies.csv", "w") do file
    # …
end

What happens if Julia fails to open the file, for example because the directory results/ doesn't exist?

You probably guessed it: Runtime error 💥

Which would mean that you lose your results and have to rerun the simulation after fixing the cause of the error.

You could wrap the code above in a try/catch statement and maybe dump the results into /tmp instead and tell the user about it.

But first of all, Julia doesn't force you to handle exceptions. The language itself doesn't even tell you about possible exceptions, you have to read the documentation of every function you use to find out if it could throw an exception. What if a possible exception isn't documented? What if a later package version introduces a new exception? To be really safe, you could wrap everything in a try/catch statement.

Is there something better than exceptions? Let's see the Rust version of the code above:

use std::fs::OpenOptions;

match OpenOptions::new()
    .create(true)
    .write(true)
    .open("results/energies.csv") {
        Ok(mut file) => {
            // Ready to write
        }
        Err(error) => {
            // Handle the error
        }
    };

open returns a Result. A Result<T, E> (with the generics T and E) is the second important enum in Rust:

enum Result<T, E> {
    Ok(T),
    Err(E)
}

open forces you to handle a possible IO error just like pop forces you to handle the None case.

With exceptions, you expect some value from a function and might be surprised with an exception. But with Result and Option, the type of the function's signature will tell you if an error could occur. No surprises!

Rust will not let you miss a case. It will not let you crash your program by mistake.

You can use unwrap on Result too, but then it is not a crash by mistake. You have decided that you would rather crash.
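
As a small sketch, that deliberate decision could look like this: the program still crashes if results/ is missing, but the crash is now explicit in the code:

let file = OpenOptions::new()
    .create(true)
    .write(true)
    .open("results/energies.csv")
    .unwrap(); // conscious decision: crash if the file can't be opened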

Again, how many times do you rerun your Julia code until no error appears? How does the time needed for this cycle scale with the complexity of the project?

How confident are you about your Julia code to not crash on some point although it ran fine when you tested it with some example input?

Rust can give you the confidence that your code is correct ✔️

Interfaces

Julia is very flexible with its multiple dispatch and type hierarchy. But let's suppose that a library introduces an abstract type that you can implement concrete types for.

If a function takes that abstract type as an argument, what methods does that function expect from my concrete type to have implemented?

Since Julia doesn't have interfaces yet, you have three options to find out the required methods: trying things out until nothing fails, searching for hidden documentation that might not even exist, or reading the source code.

To be fair, interfaces are planned for the "potential" version 2.0 of Julia. But the fact that this wasn't a priority for 1.0 strengthens my opinion that Julia is mainly designed for interactive use cases, not for large projects.

Note

Although there is a "potential" 2.0 milestone for Julia on GitHub, Julia doesn't plan a 2.0 release anytime soon.

Still, I didn't find a statement that indicates that a 2.0 release will not happen. I am not sure how Julia's ecosystem will survive a transition to 2.0, independent of when this happens.

Remember the transition from Python 2 to 3!

Rust will not have a version 2.0! It has editions that preserve backwards compatibility.

On the other hand, Rust's traits show you all required and optional methods.

The Iterator trait for example has one required method which is next. If you implement it, you get all other methods for free! If you want to, you can implement optional methods like size_hint which is useful for avoiding allocations while collecting an iterator.
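
As a small sketch, implementing the trait for a hypothetical countdown iterator looks like this:

struct Countdown {
    remaining: u32,
}

impl Iterator for Countdown {
    type Item = u32;

    // The only required method; map, filter, sum, collect, … come for free.
    fn next(&mut self) -> Option<u32> {
        if self.remaining == 0 {
            None
        } else {
            let current = self.remaining;
            self.remaining -= 1;
            Some(current)
        }
    }

    // Optional: an exact size hint lets collect pre-allocate the vector.
    fn size_hint(&self) -> (usize, Option<usize>) {
        let n = self.remaining as usize;
        (n, Some(n))
    }
}

let v: Vec<u32> = Countdown { remaining: 3 }.collect();
assert_eq!(v, vec![3, 2, 1]);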

There is no trying out, no searching for hidden documentation that might not even exist, no reading of source code. Rust will make sure at compile time that you implemented all required methods.

Performance

As mentioned above, highly optimized Julia code can get close to the performance of Rust. But it will often not reach the performance of Rust because it is very likely to trigger the garbage collector, something that Rust does not have.

But which language makes it easier to write the most efficient code?

Julia has what I call performance footguns.

Definition

Footgun: A feature that acts against one's own interest in an unexpected, destructive way.

For example, if you initialize an empty vector like v = [], you have already degraded the performance of your code to something similar to Python's (without numpy) because your vector has the element type Any. It can store any value! Therefore, Julia can't optimize operations on this vector anymore. You either have to initialize the empty vector with a concrete type like v = Float64[] or initialize it with at least one value like v = [1.0].

Julia doesn't even warn you about such performance killers! Have fun profiling, using the macro @code_warntype while interactively calling a function, etc.

If you want to write highly optimized code in Julia, you have to follow all its official performance tips. You will not even get a warning if you miss some and degrade to almost the performance of plain Python (without libraries like numpy).
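
For contrast, a minimal Rust sketch: an empty vector must have a concrete element type, either annotated or inferred, so there is no Any-style degradation to worry about in the first place:

// The element type is part of the vector's type and known at compile time …
let mut v: Vec<f64> = Vec::new();
v.push(1.0);

// … or it is inferred from the first use.
let mut w = Vec::new();
w.push(1.0_f64);
// w.push("oops"); // ❌ would not compile: expected `f64`, found `&str`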

If performance is not only "nice to have" for you, you should better use Rust!

Language server

Even if you are like me and use an editor instead of an IDE, you should at least use a language server.

Unfortunately, Julia's language server lacks a lot of features. Rust-Analyzer offers many more features that make you much more productive. Just go through the list of features and watch the GIFs. It is just amazing!

One example is "hovering over a variable" to see its type.

In Julia, "hovering" shows you the variable's declaration 😐️

In Rust on the other hand, "hovering" shows you the type of the variable (see the GIF). Seeing the type of a variable helps you with understanding what this variable actually is and how you could use it 🧐

In Julia, you either have to read the source code that returns that variable and try to derive its type or you have to run your program while printing typeof 🫤

Maybe you can't remember the name of a specific method. You could browse the documentation, but often it is much faster and easier to just type the variable name with a dot at the end (particles. for example) and then press tab. Any further typing works as a fuzzy search! Then you pick the method and enter the parameters while the signature is shown.

The language server in Julia can show you the signature, but often it is the wrong signature because of dynamic dispatch.

I don't even want to start talking about auto-completion and code actions in Rust. Just try it out yourself!

Many of the problems are related to the dynamic typing of Julia, which is supposed to be "easier" than static typing. But with the assistance of Rust-Analyzer, I can flow between types and be much more productive in the long term 💡

Documentation

Take a look at Julia's official API documentation of arrays and their manual.

Julia has sections in the sidebar which is nice. But that is pretty much it for the navigation 😐️

You can at least change the theme in the settings!

You can also search, but the search is relatively slow and there are no filtering options.

No wonder why some programmers are hyped about ChatGPT. Maybe because not only writing, but also reading documentation is often a pain?

Now compare it with the documentation of Vec in Rust.

rustdoc is an underrated piece of documentation perfection!

You can see all methods, implemented traits and even module navigation in the sidebar.

The search bar gives you the hint to press S to search and ? for more options. Yes, it has keyboard shortcuts 🤩

If you hover over a code example, a button on the upper right is shown to run it on Rust Playground which allows quick experimentation.

Code examples are automatically tested before publishing. There are no examples that are not in sync after an API change!

You can search, filter results, search for function parameters or return types, etc.

The documentation of all libraries is automatically published on docs.rs. You can just live there the whole day. Just learn to navigate in rustdoc and you won't need an AI to gather snippets from here and there 😉

Did I mention that you can have offline documentation if the libraries used in your project are already downloaded? Just run the command cargo doc --open. Very useful in a train for example 🚄

Oh, wait, I didn't mention the official Rust book yet? It is very well written and available online for free! If you want to learn Rust, just start with THE book 😍

Where Julia shines

OK, enough bashing against Julia. Let's see where Julia actually shines.

Interactivity

The second selling point of Julia on its website after performance is:

"Julia is dynamically typed, feels like a scripting language, and has good support for interactive use […]"

This is the power of Julia!

Although Rust has REPLs like evcxr and IRust for interactive usage, it doesn't get close to the experience of Julia's REPL because Rust is statically typed.

The Julia REPL is just fascinating! It is the best REPL I have used so far. It is much better than Python's REPLs although Python is also dynamically typed!

You can even plot in Julia's REPL using UnicodePlots.jl 📈

I often launch it to do some quick calculations or generate some plot.

What about Rust in notebooks? There is a Jupyter kernel for Rust, but the experience isn't comparable to Julia. It is not what Rust is designed for.

On the other hand, Julia is perfect for Jupyter notebooks. You want to do data analysis, make plots and present your results? Julia in a Jupyter notebook is what you are looking for.

You might ask, why not just use Python for notebooks?

"Julia vs Python" is another topic, but I will mention some points. Julia offers much better performance than Python, makes dealing with Arrays (vectors, matrices, tensors) much easier and has an ecosystem centered around scientific computing with many unique packages (more about this later).

Plus, Julia is mostly written in Julia! This makes reading and contributing code much easier. In Python, almost every package with good performance is written in C. Have fun reading that C code!

Pluto notebooks take Julia's interactivity to the next level. If you update one cell, every other dependent cell is also updated automatically! Without Julia's performance, this wouldn't be a pleasant experience.

If you are teaching scientific programming, check out Pluto notebooks. I find them better than Jupyter notebooks for teaching.

In general, if you are teaching programming in a scientific context, I recommend starting with Julia! It is easier to learn and work with for most beginner's scientific use cases.

Maybe offer a course teaching Rust to students in the scientific domain who want to write large projects like long-running simulations. But Rust shouldn't be the first language to teach.

A lot of scientific computing is about linear algebra, data analysis and plotting. I think that Julia with its interactivity and performance is perfect for that.

You don't want to wait for Rust to recompile just to see how one changed attribute in your plot is applied, you want instant updates!

Although Polars with its dataframes offers better performance than DataFrames.jl (see Polars benchmarks), I tried it and you really don't want to wait for Rust to recompile to see how your dataframe changes after changing one line. Just use Julia with Revise for that case!

Scientific ecosystem

Julia has a huge ecosystem with many scientific packages.

You get multi-dimensional arrays out of the box and LinearAlgebra.jl is preinstalled!

Plotting in Julia is very mature and innovative with Plots.jl and Makie. Makie itself is a whole visualization ecosystem with hardware acceleration. Just visit its website for a showcase.

Rust has Plotters which I really appreciate, but it has a long way to go and needs to receive more love! Currently, it requires a lot of boilerplate code with many manual adjustments. Julia does currently offer a much better experience for plotting.

Julia also has awesome packages for solving differential equations, numerical integration and, more recently, even symbolic computation. Even dealing with units and measurement errors is a dream in Julia!

On the other hand, the scientific ecosystem in Rust is still rather thin. It is growing, and the new conference Scientific Computing in Rust shows how people are passionately expanding it. Of course, if functionality you are looking for is still missing, you can contribute to the ecosystem by creating a library or extending an existing one. But this isn't always an option.

In the end, it is a chicken-and-egg problem. If people don't use a language in a specific domain because its ecosystem is not as mature as another language's, then that ecosystem will never get the chance to mature.

If you are using Julia, chances are that you used Python before. Remember that Julia's ecosystem wasn't (and still isn't) as mature as Python's ecosystem. But people jumped in and Julia experienced a rapid growth in its ecosystem. Rust is catching up!

Which language to use?

For scientific computing, I would recommend using Rust for projects that …

Julia is a better fit for projects that …

My personal conclusion

To get back to the initial question: Does Julia solve the two-language problem?

For me, the answer is: No

Although Julia has a just-in-time compiler that can make it very efficient, it misses the advantages of compilers of statically typed languages.

The Rust compiler is a major help for writing correct code, doing refactorings and scaling a project. Compared to C/C++, it eliminates even more classes of bugs at compile time. The most important ones for scientific computing are data races, memory bugs and uncaught exceptions.

Even if you only care about performance: for maximum performance without Julia's footguns, use Rust instead!

Personally, I currently use Julia for quickly testing some numerical ideas in the REPL, for some weekly university submissions and for plotting. For everything else, I use Rust. For some projects, I even use both by exporting the results of my Rust program and visualizing them with Julia. It is even possible to call Rust from Julia and the other way around.


Is the two-language problem really a problem for developers?

I don't think so. Statically and dynamically typed languages have their own strengths and use cases, especially for scientific computing.

I am very happy to see Julia replacing Python in scientific computing and Rust replacing C/C++ (not only in scientific computing 😉). It is a needed evolution of programming languages.

It is not a war. The two languages should coexist. And I will continue pushing both of them 🥰


Appendix

JET.jl

After releasing the blog post, some people pointed out that I could have used JET.jl which can detect the errors in my examples using a static analysis.

I know about JET and I think that it is an improvement. But it will only detect some errors.

Julia's second selling point on its website is that it is dynamically typed which is good for interactivity and flexibility. But a dynamically typed language can't be fully analyzed with a static analysis tool.

Let's take a look at the following example:

function test()
    v1 = [] # Vector{Any}
    v2 = [1.0]
    push!(v1, v2) # v1 is still Vector{Any}

    last = pop!(v1) # Any
    println("OK") # OK
    println(last.pop()) # Runtime error 💥
    println("No problem!") # Not reachable

    nothing
end

As the comments show, the empty vector v1 has the type Vector{Any} which is not only a performance footgun as mentioned in the post, but also a static analysis killer.

last has the type Any because it was popped from Vector{Any}. We can see in this trivial example that last should be v2 which is a vector of floats. But this knowledge can't be derived by Julia.

last being Any means that Julia and therefore JET can't know if it supports .pop(). This will only be determined at runtime which will lead to a runtime error 💥

Let's test the analysis of JET on our function test:

julia> @report_call test()
No errors detected

No errors detected doesn't mean that no errors exist, as you can see when we run the function:

julia> test()
OK
ERROR: type Array has no field pop
Stacktrace:

A language that offers a top type (which is Any in Julia) can't have full static analysis like a language with a strict type system such as Rust.

Even JET is transparent about this in its README:

"Note that, because JET relies on Julia's type inference, if a chain of inference is broken due to dynamic dispatch, then all downstream function calls will be unknown to the compiler, and so JET cannot analyze them."


Credits

I want to thank the members of my lovely local Rust group for their support, feedback and corrections; especially Dr. Michael Distler.

There has been a long discussion about this blog post on the Julia forum. Some corrections have been made during that discussion. Thanks to the Julia community ❤️

You can suggest improvements on the website's repository

Content license: CC BY-NC-SA 4.0