Rust vs Julia in scientific computing

Tags: #rust,#julia

Reading time: ~32min

One of the main objectives of Julia is solving the two-language problem. This means that by using Julia, you don't have to prototype in a dynamic language like Python for flexibility and later rewrite the code in a compiled language like C/C++ for performance.

This goal impressed me while picking a programming language for my bachelor's thesis in physics. But after regularly using and even teaching Julia, do I still think that Julia solves that two-language problem?

And why do I think that, in some cases, you should use Rust instead?


This blog post is the base for my tiny talk at Scientific Computing in Rust 2023.

If you prefer watching a short video as a trailer, you can watch the recorded talk. But the blog post has many more details and aspects that cannot fit into 7 minutes.

⚠️ Warning ⚠️

I have to warn you, I am in love with Rust!

Therefore, this post is biased towards Rust. But I actually use Julia regularly and I even spread it at my university by teaching a vacation course about it. I just think that the promises of Julia can be misleading and there are use cases where you should just use Rust over Julia. Read more to find out why.

The code examples were tested with Rust 1.71.0 and Julia 1.9.2.

Landscape mode recommended on mobile devices

Fearless concurrency*

Julia makes multithreading very easy. In fact, multithreading in Julia is a matter of adding the @threads macro in front of a for loop!

Although Julia makes multithreading easier, it doesn't make it any safer! Let's take a look at the following well-known example:

function unsafe_count()
    counter = 0

    Threads.@threads for _ in 1:10_000
        counter += 1
    end

    println(counter)
end

If you are not familiar with multithreading, you would expect the result to be 10_000. But if you run it multiple times, you will get varying results smaller than 10_000.


The output is random because of data-races.

This data-race happens because a thread reads the current value of counter, adds 1 to it and stores the result back in the same variable. If two threads read the variable at the same time and then both add 1, they compute the same result and both store it, which means that we lose one addition.

The following demonstrates the scenario without a data-race:

counter | thread 1  | thread 2
3       | read 3    |
3       | 3 + 1 = 4 |
4       | write 4   |
4       |           | read 4
4       |           | 4 + 1 = 5
5       |           | write 5

In the case of a data-race, an addition is lost:

counter | thread 1  | thread 2
3       | read 3    |
3       |           | read 3
3       | 3 + 1 = 4 | 3 + 1 = 4
4       | write 4   |
4       |           | write 4

Let's translate the Julia code into Rust:

use rayon::prelude::*;

let mut counter = 0;

(0..10_000).into_par_iter().for_each(|_| {
    counter += 1;
});

We use the rayon crate which offers easy multithreading using iterators.

Fortunately, the Rust code above will not compile! ❌

In Rust, either you have only one mutable reference and no immutable ones or (XOR) you have any number of immutable references but no mutable ones.

For a data-race to happen, at least two threads must access the same memory location at the same time, with at least one of them writing. In the Julia code above, every thread has its own mutable reference to the counter variable! It is proven that Rust's type system and borrow checker make data-races impossible!


*: With Rust, you almost have fearless concurrency. Data-races are impossible, but you still have to fear deadlocks!

This blog post is about safe Rust, which means Rust without using unsafe.

To make the Rust version compile, we need to either use a Mutex or an atomic. Atomics guarantee on the hardware level that their supported operations are done atomically, which means in one step! Since atomics have better performance than a mutex, we will use AtomicU64 (unsigned integer with 64 bits):

use std::sync::atomic::{AtomicU64, Ordering};

let counter = AtomicU64::new(0);

(0..10_000).into_par_iter().for_each(|_| {
    counter.fetch_add(1, Ordering::SeqCst);
});


Note that counter is not mutable anymore! There is no more mut after let. Since operations on atomic types guarantee to not introduce data-races, they take an immutable reference &self instead of a mutable one &mut self. This allows us to use them on multiple threads (because it is allowed to have multiple immutable references).

Of course, the atomic Rust version above returns 10_000 🎉

If it compiles, it is data-race free 😃


Discussing Ordering::SeqCst would go beyond the scope of this blog post. You can read more about atomic memory orderings in the documentation.
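For completeness, the counter could also be protected with a Mutex instead of an atomic. Below is a minimal sketch; to keep it dependency-free it uses std::thread::scope with hand-picked thread counts rather than rayon, which is my simplification and not from the original example:

```rust
use std::sync::Mutex;
use std::thread;

// Count to 10_000 from four threads, protected by a Mutex.
fn count() -> u64 {
    let counter = Mutex::new(0_u64);

    // std::thread::scope (stable since Rust 1.63) lets the spawned
    // threads borrow `counter` directly, without needing an Arc.
    thread::scope(|s| {
        for _ in 0..4 {
            s.spawn(|| {
                for _ in 0..2_500 {
                    // Lock, increment; the guard unlocks when dropped.
                    *counter.lock().unwrap() += 1;
                }
            });
        }
    });

    counter.into_inner().unwrap()
}

fn main() {
    println!("{}", count());
}
```

The Mutex version compiles for the same reason the atomic one does: lock takes an immutable reference &self, so many threads may share the counter, but only the thread currently holding the guard can mutate the value inside.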

The correct Julia version looks very similar to the Rust version:

function safe_count()
    counter = Threads.Atomic{UInt64}(0)

    Threads.@threads for _ in 1:10_000
        Threads.atomic_add!(counter, UInt64(1))


This means that Julia does have atomics too. But it is not able to detect a possible data-race in order to recommend using them, or at least to warn us.

Julia's multithreading documentation states: "You are entirely responsible for ensuring that your program is data-race free [...]"

Moore's law is almost dead, at least for single core performance. Therefore, we need a language that makes concurrency not only easy, but also correct.

Project scalability

How hard is it to maintain, extend and reason about the correctness of Julia code while a project grows?

Static analysis

Highly optimized Julia code can get close to the performance of Rust because of Julia's just-in-time (JIT) compilation.

But producing optimized machine code is not the only purpose of compilers. Julia lacks a very important advantage of a real compiler: static analysis!

Take a look at the following example in Julia:

v = [1.0]

println("OK")

v.pop()

println("No problem!")

Did you find any problem? Well, Julia doesn't see a problem in it until it reaches the problematic line!

If you run the code above, you will see OK printed out before you get an error because we used Rust's syntax for pop. We should have used pop!(v) in Julia.

You might think that this is fine, a simple test run will find this bug.

But what if the buggy code is behind some condition that is dependent on the program input or which is just random like in a Monte Carlo simulation? Here is a demonstration:

v = [1.0]

if rand(Bool)
    v.pop()
end

println("No problem!")

If you run this Julia code, you should have about 50% chance of just passing by the buggy block and printing out No problem!.

Well, this is a problem, a big one! Such a type bug can be prevented by a simple type system with static analysis.
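For contrast, here is a hypothetical Rust sketch of the same situation (rand_bool is my crude stand-in for rand(Bool) to avoid pulling in a dependency): a misspelled or non-existent method would be rejected at compile time, no matter how rarely the branch is taken at runtime.

```rust
fn main() {
    let mut v = vec![1.0];

    // If this line read `v.pop_back()` (a method that does not exist
    // on Vec), the program would not compile at all -- independent of
    // how unlikely it is that this branch is ever taken at runtime.
    if rand_bool() {
        v.pop();
    }

    println!("No problem!");
}

// Hypothetical helper: a crude "random" bit derived from the system
// clock, just to keep this sketch free of external crates.
fn rand_bool() -> bool {
    use std::time::{SystemTime, UNIX_EPOCH};
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .subsec_nanos()
        % 2
        == 0
}
```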

Why am I talking about static analysis under the topic of scalability?

Let's say we are writing some molecular dynamics simulation. Take a look at the following example:

particles = [[1.0, 2.0], [2.0, 3.0], [42.0, 35.9]]

for particle in particles
    distance_to_origin = sqrt(particle[1]^2 + particle[2]^2)
    println("Particle's distance to origin: $distance_to_origin")
end

center_of_mass = sum(particle for particle in particles) / length(particles)
println("Center of mass: $center_of_mass")

We create some particles by storing their positions in a vector. As two placeholders for some computations, we calculate their distance to the origin and their center of mass (assuming that they all have mass 1).

Let's say that, later on, we want to take the charge of particles into account for a better accuracy of our simulation. Therefore, we create a struct called Particle storing the position and charge:

struct Particle
    position::Vector{Float64}
    charge::Float64
end

particles = [Particle([1.0, 2.0], 1.0), Particle([2.0, 3.0], -1.0), Particle([42.0, 35.9], 0.0)]

for particle in particles
    distance_to_origin = sqrt(particle[1]^2 + particle[2]^2)
    println("Particle's distance to origin: $distance_to_origin")
end

center_of_mass = sum(particle for particle in particles) / length(particles)
println("Center of mass: $center_of_mass")

We changed the content of the particles vector from positions to instances of Particle.

We don't use the introduced charge yet. We just want to make sure that we didn't break anything.

We run our code and get an error because we are now trying to index into the Particle struct instead of a position vector while calculating the distance to the origin.

No problem, you might think. We just forgot to adjust that line. We can just fix it and run the code again!

for particle in particles
    distance_to_origin = sqrt(particle.position[1]^2 + particle.position[2]^2)
    println("Particle's distance to origin: $distance_to_origin")
end

Are we done now? If we run it, we get another error. We missed one more line where the center of mass is calculated!

We can fix it easily like the following:

center_of_mass = sum(particle.position for particle in particles) / length(particles)
println("Center of mass: $center_of_mass")

But how long will you stay in the cycle of running and fixing after a change in a bigger program?

Will you be sure that you didn't miss any line after your code runs without errors?

Such changes that affect a relatively big part of a codebase are called refactorings.

Refactoring in Rust is a smooth compiler driven process. The compiler will throw an error for everything that you didn't adjust yet. You just work through the list of compiler errors. After solving that puzzle, your program compiles and you can pretty much be sure that you didn't forget anything!
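As a sketch of what this feels like, here is a hypothetical Rust counterpart of the Particle refactoring (the names mirror the Julia example; the fixed-size array for the position is my simplification):

```rust
// After the refactoring: positions wrapped in a struct with a charge.
struct Particle {
    position: [f64; 2],
    charge: f64, // not used yet, kept to mirror the Julia example
}

fn distance_to_origin(particle: &Particle) -> f64 {
    // Writing `particle[0]` here -- the pre-refactoring access --
    // would be a compile-time error pointing at this exact line.
    (particle.position[0].powi(2) + particle.position[1].powi(2)).sqrt()
}

fn main() {
    let particles = vec![
        Particle { position: [1.0, 2.0], charge: 1.0 },
        Particle { position: [2.0, 3.0], charge: -1.0 },
    ];

    for particle in &particles {
        println!(
            "Particle's distance to origin: {}",
            distance_to_origin(particle)
        );
    }
}
```

Every call site that still treats a Particle as a plain position vector shows up as a compiler error, so the run-and-fix cycle from the Julia version collapses into a single compile.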

No errors at runtime!

Of course, this doesn't mean that you could have forgotten something related to the logic of your program. You should have some tests for that!

When you write tests in Rust, you test the logic of your program. You should just make sure that you still get the expected output for specific inputs. But you do not test if your code has systematic bugs or possible crashes.

We could build a linter for Julia, just like the many linters for Python. But linters for dynamically typed languages will never reach the power and correctness of static analysis built on a statically typed language. It would be like pouring more cement on a fragile foundation just to make it a bit safer.

Error handling

In the last section, we discussed systematic bugs that can be detected by a static analysis.

What about errors that can not be directly detected at compile time?

Julia offers exceptions for dealing with such cases. How about Rust?

Option: To be or not to be, that is the question

What happens when you run the following code in Julia?

v = [1.0]
pop!(v) * pop!(v)

Well, there is only one value, therefore the second pop! will fail. But how?

It will fail with an error at runtime 💥

Can Rust prevent that? Let's take a look at the signature of pop for Vec<T> in Rust (Vec is a vector, T is a generic):

pop(&mut self) -> Option<T>

It takes a mutable reference of the vector holding values of type T and returns an Option<T>.

Option is just an enum, a very simple but powerful one!

The definition of Option in the standard library is the following:

enum Option<T> {
    None,
    Some(T),
}

This means that an Option<T> can either be None or Some with some value of type T.

Let's see how the above Julia code would look in Rust:

let mut v = vec![1.0];

v.pop() * v.pop()

If you try to compile it, you will get a (normally colored) lovely error message like the following (Rust has the best error messages 😍):

error[E0369]: cannot multiply `Option<{float}>` by `Option<{float}>`
  --> src/
18 |     v.pop() * v.pop()
   |     ------- ^ ------- Option<{float}>
   |     |
   |     Option<{float}>

The easiest way to handle an Option is to unwrap it:

let mut v = vec![1.0];

v.pop().unwrap() * v.pop().unwrap()

The behavior of unwrapping a None is just like Julia's: it panics at runtime.

You might think that we at least made a possible panic explicit, right?

But you should not use unwrap in production code. In Rust, you should do proper pattern matching:

let mut v = vec![1.0];

let v1 = match v.pop() {
    Some(value) => value,
    None => 1.0,
};

let v2 = match v.pop() {
    Some(value) => value,
    None => 1.0,
};

v1 * v2

We use pattern matching to handle the Option. In case the Option is None, we use 1 as the neutral element of multiplication.

You might think that this is a lot of boilerplate code! You are right! But it was only a demonstration of pattern matching to understand how handling an Option works.

The code above can be reduced to the following:

let mut v = vec![1.0];

v.pop().unwrap_or(1.0) * v.pop().unwrap_or(1.0)

The implementation of unwrap_or for Option looks like the following:

fn unwrap_or(self, default: T) -> T {
    match self {
        Some(x) => x,
        None => default,
    }
}
It is just what we have done in the long version, but unwrap_or is a convenient method.

You might think that the result 1 is not what you would expect if the vector is empty. You can handle it differently. But, you see, you are just thinking about how to correctly handle cases where something doesn't work as expected! My mission is accomplished 😉

Failure is not an Option, it's a Result!

Let's say that you want to write the results of a long simulation in Julia:

open("results/energies.csv", "w") do file
    write(file, "1,2,3")
end

What happens if Julia fails to open the file, for example because the directory results/ doesn't exist?

You probably guessed it: Runtime error 💥

Which would mean that you lose your results and have to rerun the simulation after fixing the cause of the error.

You could wrap the code above in a try/catch statement and maybe dump the results into /tmp instead and tell the user about it.

But first of all, Julia doesn't force you to handle exceptions. The language itself doesn't even tell you about possible exceptions, you have to read the documentation of every function you use to find out if it could throw an exception. What if a possible exception is not documented? To be really safe, you could wrap everything in a try/catch statement.

Is there something better than exceptions? Let's see the Rust version of the code above:

use std::fs::OpenOptions;

match OpenOptions::new()
    .write(true)
    .create(true)
    .open("results/energies.csv")
{
    Ok(mut file) => {
        // Ready to write
    }
    Err(error) => {
        // Handle the error
    }
}

open returns a Result. A Result<T, E> (with the generics T and E) is the second important enum in Rust:

enum Result<T, E> {
    Ok(T),
    Err(E),
}

open forces you to handle a possible IO error just like pop forces you to handle the None case.

With exceptions, you expect some value from a function and might be surprised with an exception. But with Result and Option, the type of the function's signature will tell you if an error could occur. No surprises!

Rust will not let you miss a case. It will not let you crash your program by mistake.

You can use unwrap on Result too, but then it is not a crash by mistake. You did decide that you would rather want to crash.
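As a sketch, the usual production-style alternative to unwrap is the ? operator, which propagates the Result to the caller instead of crashing (the fallback handling below is illustrative):

```rust
use std::fs::File;
use std::io::Write;

// Returns an Err instead of crashing when e.g. the directory
// is missing. `?` returns early with the error on failure.
fn write_results(path: &str) -> std::io::Result<()> {
    let mut file = File::create(path)?; // `?` propagates any IO error
    file.write_all(b"1,2,3")?;
    Ok(())
}

fn main() {
    if let Err(error) = write_results("results/energies.csv") {
        // Handle it here: e.g. dump the results to /tmp
        // instead of losing hours of simulation time.
        eprintln!("Could not write results: {error}");
    }
}
```

The signature io::Result<()> tells every caller, at compile time, that this function can fail. No surprises.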

Again, how many times do you rerun your Julia code until no error appears? How does the time needed for this cycle scale with the complexity of the project?

How confident are you that your Julia code will not crash at some point, although it ran fine when you tested it with some example input?

Rust can give you the confidence that your code is correct ✔️


Interfaces

Julia is very flexible with its multiple dispatch and type hierarchy. But let's suppose that a library introduces an abstract type that you can implement concrete types for.

If a function takes that abstract type as an argument, what methods does that function expect from my concrete type to have implemented?

Since Julia doesn't have interfaces yet, you have three options to find out the required methods: trial and error, searching the documentation (which might not even cover them) or reading the library's source code.

To be fair, interfaces are planned for the version 2.0 of Julia. But the fact that this was not a priority for 1.0 strengthens my opinion that Julia is mainly designed for interactive use cases, not for large projects.


Although there is a 2.0 milestone for Julia on GitHub, Julia does not plan a 2.0 release anytime soon.

Still, I didn't find a statement that indicates that a 2.0 release will not happen. I am not sure how Julia's ecosystem will survive a transition to 2.0, independent of when this happens.

Remember the transition from Python 2 to 3!

Rust will not have a version 2.0! It has editions that preserve backwards compatibility.

On the other hand, Rust's traits show you all required and optional methods!

The Iterator trait for example has one required method which is next. If you implement it, you get all other methods for free! If you want to, you can implement optional methods like size_hint which is useful for avoiding allocations while collecting an iterator.

There is no trying out, no searching for hidden documentation that might not even exist, no reading of source code. Rust will make sure at compile time that you implemented all required methods.
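As a small illustration, here is a toy iterator (my own example, not from the Rust documentation): implementing only the one required method next unlocks all the provided methods.

```rust
// A countdown that yields n, n-1, …, 1. Implementing only `next`
// gives us map, filter, sum, collect, … for free.
struct Countdown(u32);

impl Iterator for Countdown {
    type Item = u32;

    fn next(&mut self) -> Option<u32> {
        if self.0 == 0 {
            None // iteration is over
        } else {
            let current = self.0;
            self.0 -= 1;
            Some(current)
        }
    }
}

fn main() {
    // `sum` is one of the provided methods we got for free.
    let total: u32 = Countdown(3).sum();
    println!("{total}"); // 3 + 2 + 1 = 6
}
```

If you forget to implement next, or give it the wrong signature, the trait implementation simply doesn't compile.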


Performance

As mentioned above, highly optimized Julia code can get close to the performance of Rust. But it will only reach Rust's performance if it avoids triggering the garbage collector, which Rust doesn't even have.

But which language makes it easier to write the most efficient code?

Performance footguns

Julia has what I call performance footguns.

For example, if you initialize an empty vector like v = [], you have already degraded the performance of your code to something similar to Python's (without numpy), because your vector has the type Any. It can store any value! Therefore, Julia cannot optimize this vector anymore. You either have to initialize the empty vector with a concrete type like v = Float64[] or you have to initialize it with at least one value like v = [1.0].

Julia will not tell you about such performance killers! Have fun profiling, using the macro @code_warntype while interactively calling a function, etc.

Preallocation and undefined behavior

We all know that allocations are often a bottleneck. Julia recommends pre-allocation like the following:

v = Vector{Int64}(undef, 3)

What would happen if you forget to set an undef (undefined) value and read it by mistake? You would read arbitrary garbage values from uninitialized memory.

Welcome to the realm of undefined behavior with uninitialized memory 😱

With Rust, you can initialize a vector with with_capacity. But it will be empty with length 0.

Capacity is not length. Capacity is the amount of data the vector can hold without needing to reallocate again. The length is the amount of data that the vector stores.

The capacity is always greater than or equal to the length. Your goal is to prevent the length from exceeding the current capacity, because then the vector has to be reallocated with a bigger capacity.
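The difference between length and capacity can be seen directly (a minimal sketch):

```rust
fn main() {
    // Reserve room for 4 elements up front.
    let mut v: Vec<i32> = Vec::with_capacity(4);

    assert_eq!(v.len(), 0); // nothing is stored yet
    assert!(v.capacity() >= 4); // but there is room for at least 4

    v.push(1);
    assert_eq!(v.len(), 1); // one element stored
    assert!(v.capacity() >= 4); // pushing within capacity: no reallocation

    println!("len = {}, capacity = {}", v.len(), v.capacity());
}
```

Reading an element beyond the length, like v[3] here, is a panic in Rust, never uninitialized memory.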

Julia offers the function sizehint! to reserve capacity. But it recommends the method with possible undefined behavior instead of this function. Why? 🤔


Julia offers functions like zeros, ones and fill. But these have the overhead of overwriting the memory first.

You still get logic bugs with these three functions if you forget to set some values, but at least it is not undefined behavior.

If you care about performance, you should not use sizehint! for array preallocation. Julia recommends the method with undef for a "good" reason. Let's take a look at the following benchmarking that compares both preallocation methods:

using BenchmarkTools

function benchmark_alloc()
    n = 2^8
    m = 2^14

    @btime for _ in 1:$n
        v = Int64[]
        sizehint!(v, $m)

        for i in 1:$m
            push!(v, i)
        end
    end

    @btime for _ in 1:$n
        v = Vector{Int64}(undef, $m)

        for i in 1:$m
            v[i] = i
        end
    end
end


I know that you can do what the example does much better with a range 1:m which you can collect if you really want a vector. But these trivial examples are only for demonstration. You should focus on the effects.

Let's run this benchmarking:

julia> benchmark_alloc()
  16.949 ms (512 allocations: 32.03 MiB)
  2.866 ms (512 allocations: 32.01 MiB)

The version with sizehint! is about 6 times slower than the method with undef 🤯

The reason is that push! always does an expensive call into C code!

Does this mean that preallocation with uninitialized values is in general faster? Does it mean that we have to tolerate methods with possible undefined behavior for performance?

Let's use the first method with capacity in Rust:

use std::time::Instant;

fn main() {
    let n = 2_usize.pow(8);
    let m = 2_usize.pow(14);

    let now = Instant::now();

    for _ in 0..n {
        let mut v = Vec::with_capacity(m);

        for i in 0..m {
            v.push(i);
        }
    }

    println!("Elapsed: {:?}", now.elapsed());
}

The output is:

Elapsed: 2.518863ms

The time it takes is close to that of the method with undef in Julia because Rust doesn't abstract away the concept of capacity and manages it internally. Without possible undefined behavior 😉

Rust doesn't allow undefined behavior and allows you to get as low level as you want with maximum performance. Just check out the methods of Vec as an example of what it offers.

If you want to write highly optimized code in Julia, you have to follow all its official performance tips. You will not even get a warning if you miss some and degrade to almost the performance of plain Python (without numpy etc.).

If performance is not only "nice to have" for you, if every improvement can save you hours of expensive computation, you should better use Rust!

Language server

Even if you are like me and use an editor instead of an IDE, you should at least use a language server.

Unfortunately, Julia's language server lacks a lot of features. Rust-Analyzer offers many more features that make you much more productive. Just go through the list of features and watch the GIFs. It is just amazing!

One example is "hovering over a variable" to see its type.

In Julia, "hovering" shows you the variable's declaration 😐️

In Rust on the other hand, "hovering" shows you the type of the variable (see the GIF). Seeing the type of a variable helps you with understanding what this variable actually is and how you could use it 🧐

In Julia, you either have to read the source code that returns that variable and try to derive its type or you have to run your program while printing typeof 🫤

Maybe you can not remember the name of a specific method. You could browse the documentation, but often it is much faster and easier to just type the variable name with a dot at the end (particles. for example) and then press tab. Any further typing works as a fuzzy search! Then you pick the method and enter the parameters while the signature is shown.

The language server in Julia can show you the signature, but often it is the wrong signature because of dynamic dispatch.

I do not even want to start talking about auto-completion and code actions in Rust. Just try it out yourself!

Many of the problems are related to the dynamic typing of Julia, which is supposed to be "easier" than static typing. But with the assistance of Rust-Analyzer, I can flow between types and be much more productive in the long term 💡


Documentation

Take a look at Julia's official API documentation of arrays and their manual.

Julia has sections in the sidebar which is nice. But that is pretty much it for the navigation 😐️

You can at least change the theme in the settings!

You can also search, but the search is relatively slow and there are no filtering options.

No wonder why some programmers are hyped about ChatGPT. Maybe because not only writing, but also reading documentation is often a pain?

Now compare it with the documentation of Vec in Rust.

rustdoc is an underrated piece of documentation perfection!

You can see all methods, implemented traits and even module navigation in the sidebar.

The search bar gives you the hint to press S to search and ? for more options. Yes, it has keyboard shortcuts 🤩

If you hover over a code example, a button on the upper right is shown to run it on Rust Playground which allows quick experimentation.

Code examples are automatically tested before publishing. There are no examples that are not in sync after an API change!

You can search, filter results, search for function parameters or return types, etc.

The documentation of all crates is automatically published on docs.rs. You can just live there the whole day. Just learn to navigate in rustdoc and you will not need an AI to gather snippets from here and there 😉

Did I mention that you can have offline documentation if you did already download the crates? Just run the command cargo doc --open. Very useful in a train for example 🚄

Oh, wait, I didn't mention the official Rust book yet? It is very well written and available online for free! If you want to learn Rust, just start with THE book 😍

Where Julia shines

OK, enough bashing against Julia. Let's see where Julia actually shines.


Interactivity

The second selling point of Julia on its website, right after performance, is:

"Julia is dynamically typed, feels like a scripting language, and has good support for interactive use."

This is the power of Julia!

Although Rust has evcxr and irust, it will not even get close to the experience of the Julia REPL because Rust is statically typed.

The Julia REPL is just fascinating! It is the best REPL I have used so far. It is even much better than the REPLs in Python although Python is also dynamically typed!

You can even plot in the REPL using UnicodePlots 📈

I often launch it to do some quick calculations or generate some plot.

Rust in a notebook? There is a Jupyter kernel for Rust, but the experience is not even comparable. It is not what Rust is designed for.

On the other hand, Julia is just perfect for Jupyter notebooks! You want to do data analysis, make plots and present your results? Julia in a Jupyter notebook is what you are looking for!

Many think that Jupyter notebooks were invented for Python. But did you know that the name "Jupyter" is composed of Julia, Python and R?

You might ask, why not just use Python for notebooks?

"Julia vs Python" is another topic, but I will mention some points. Julia offers much better performance than Python, makes dealing with Arrays (vectors, matrices, tensors) much easier and has an ecosystem centered around scientific computing with many unique packages (more about this later).

Plus, Julia is written in Julia! This makes it much easier to read and contribute to the code. In Python, almost every package with good performance is written in C. Have fun reading that C code!

Pluto notebooks take the interactivity of Julia to the next level! If you update one cell, every other dependent cell is also updated automatically! Without the performance of Julia, this would not be possible!

If you are teaching scientific programming, check out Pluto notebooks! They are perfect for teaching!

In general, if you are teaching programming in a scientific context, pick Julia! It is easier to learn and work with for most beginner's scientific use cases.

Maybe offer an optional course teaching Rust for students in the scientific field who want to write large projects like long simulations. But Rust should not be the first language you teach, unless your students have a computer science background.

A lot of scientific computing is about linear algebra, data analysis and plotting. I think that Julia with its interactivity and performance is just perfect for that.

You don't want to wait for Rust to recompile just to see how one attribute in your plot changes, you want instant updates!

Although Polars with its dataframes offers better performance than DataFrames.jl (see the Polars benchmarks), you really don't want to wait for Rust to recompile just to see how your dataframe changes after modifying one line. I tried it. Just use Julia with Revise for that case!

Scientific ecosystem

Julia has a huge ecosystem with many scientific packages.

You get arrays out of the box and LinearAlgebra.jl is preinstalled!

Plotting in Julia is very fascinating with Plots.jl or even Makie. Makie is a whole visualization ecosystem with hardware acceleration! Just take a look at this blog post for a showcase.

Rust has Plotters which I really appreciate, but it has a long way to go and needs to receive more love! Currently, it requires a lot of boilerplate code with many manual adjustments. Julia does currently offer a much better experience for plotting.

Julia also has awesome packages for solving differential equations, numerical integration and even, more recently, symbolic computation. Even dealing with units and measurement errors is a dream in Julia!

On the other hand, the scientific ecosystem in Rust is still rather thin. It is growing, and the new conference Scientific Computing in Rust shows how people are expanding it. Of course, if a functionality you are looking for is still missing, you can contribute to the ecosystem by creating a crate or extending an existing one. But this is not always an option.

In the end, it is a chicken-and-egg problem. If people don't use a language in a specific field because its ecosystem is not that mature in comparison to another language, then that ecosystem will never get mature at all.

If you are using Julia, chances are that you used Python before. Remember that Julia's ecosystem wasn't (and still isn't) that mature in comparison to Python. But people jumped in and Julia experienced a rapid growth in its ecosystem. Rust is catching up!

Which language to use?

For scientific computing, I would recommend using Rust for projects that …

Julia is a better fit for projects that …

My personal conclusion

To get back to the initial question: Does Julia solve the two-language problem?

For me, the answer is: No

Although Julia has a just-in-time compiler that can make it very efficient, it misses the advantages of a real compiler for a statically typed language.

The Rust compiler is a major help for writing correct code, doing refactorings and scaling a project. Compared to C/C++, it eliminates even more classes of bugs at compile time. The most important ones for scientific computing are data-races, memory safety and uncaught exceptions.

Even if you only care about performance, for the maximum performance without Julia's performance footguns, use Rust instead of Julia!

Personally, I currently use Julia for quickly testing some numerical ideas in the REPL, for my weekly submissions for lectures about numerics and for plotting. For everything else, including non-weekly submissions and projects, I use Rust.

For some projects, I even use both! I export the results of my Rust program and visualize them with Julia 😃

Is the two-language problem really a problem for developers?

I don't think so. Statically and dynamically typed languages have their own strengths and use cases, especially for scientific computing.

I am very happy about having Julia replacing Python for scientific computing and Rust replacing C/C++ (not only in scientific computing 😉). It is a needed evolution of programming languages!

It is not a war. The two languages should coexist. And I will continue pushing both of them 🥰



Update: JET.jl

After releasing the blog post, some people pointed out that I could have used JET.jl, which can detect the errors in my examples using static analysis.

I know about JET and I think that it is an improvement. But it will only detect some errors.

Julia's second selling point on its website is that it is dynamically typed which is good for interactivity and flexibility. But a dynamically typed language can not be fully analyzed with a static analysis tool.

Let's take a look at the following example:

function test()
    v1 = [] # Vector{Any}
    v2 = [1.0]
    push!(v1, v2) # v1 is still Vector{Any}

    last = pop!(v1) # Any
    println("OK") # OK
    println(last.pop()) # Runtime error 💥
    println("No problem!") # Not reachable
end


As the comments show, the empty vector v1 has the type Vector{Any} which is not only a performance footgun as mentioned in the post, but also a static analysis killer.

last has the type Any because it was popped from Vector{Any}. We can see in this trivial example that last should be v2 which is a vector of floats. But this knowledge can not be derived by Julia.

last being Any means that Julia and therefore JET can not know if it supports .pop(). This will only be determined at runtime which will lead to a runtime error 💥

Let's test the analysis of JET on our function test:

julia> @report_call test()
No errors detected

No errors detected doesn't mean that no errors exist, as you can see when we run the function:

julia> test()
ERROR: type Array has no field pop

A language that offers a TOP type (which is Any in Julia) can not be fully statically analyzed like Rust, a language with a strict type system.
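In Rust, by contrast, the element type never degrades to a top type. A sketch of the same nesting as in the test function above shows that the compiler tracks the type through every pop:

```rust
fn main() {
    // The Rust analogue of v1: the element type Vec<f64> is
    // part of the type and can never widen to "anything".
    let mut outer: Vec<Vec<f64>> = Vec::new();
    outer.push(vec![1.0]);

    // `last` is statically known to be Option<Vec<f64>>;
    // calling a misspelled or missing method on it would be
    // a compile-time error, never a runtime one.
    if let Some(mut last) = outer.pop() {
        println!("{:?}", last.pop()); // Some(1.0)
    }

    println!("OK");
}
```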

Even JET is transparent about this in its README:

"Note that, because JET relies on Julia's type inference, if a chain of inference is broken due to dynamic dispatch, then all downstream function calls will be unknown to the compiler, and so JET cannot analyze them."


Acknowledgments

I want to thank the members of my lovely local Rust group for their support, feedback and corrections; especially Dr. Michael Distler.

There has been a long discussion about this blog post on the Julia forum. Some corrections have been made during that discussion. Thanks to the Julia community ❤️

You can suggest improvements on the website's repository

Content license: CC BY-NC-SA 4.0