TensorFlow package missing

I was looking at some tutorials on machine learning, and you quickly need to install TensorFlow.
Unfortunately, it seems hard to find a package for aarch64. Manjaro x64 does have one (several, as far as I can see).
Has anyone tried to build one for our architecture?
I’ve seen that someone did it for other distros: Releases · KumaTea/tensorflow-aarch64 · GitHub
Maybe it can help.

What do you mean, “other distros”? They provide wheels to install for your architecture, right? What happens if you install them?

So, I tried what they suggested, in a venv:

pip install tensorflow -f https://tf.kmtea.eu/whl/stable.html

It downloaded a lot of dependencies and downgraded numpy to 1.19.2! :frowning:

It seems to work for now, but I’ve only run very short tests. There is a warning saying this version of numpy can cause a RuntimeError.
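
For anyone curious, a minimal smoke test along these lines is enough to confirm the wheel imports and runs (the matmul example is just an illustration, not from the tutorials):

# Minimal smoke test of the installed wheel; the matmul call is an
# arbitrary tiny computation just to exercise the runtime.
import numpy as np
import tensorflow as tf

print(np.__version__, tf.__version__)  # shows the downgraded numpy, e.g. 1.19.2

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(tf.matmul(x, x).numpy())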

I’m not very fond of this way of doing things, because Manjaro x86 has packages and maintains them, and I don’t know what KumaTea put in his package.

It’s a really quick workaround, not a complete solution, in my opinion.

TensorFlow requires NVIDIA drivers to build. Those drivers are not available in the aarch64 repo.

You’re right, but it seems that’s not mandatory in general.
For instance, KumaTea succeeded in building it for aarch64 without NVIDIA drivers: tensorflow-aarch64/build at main · KumaTea/tensorflow-aarch64 · GitHub

So, I guess that would mean a different PKGBUILD and a different package name, tensorflow-aarch64?

According to your link, the machine building the package needs over 12 GB of RAM, which makes this package very hard to build and maintain on ARM.

Arch maintains it, not Manjaro.

How useful is TensorFlow on ARM boards? Doesn’t it require a lot of resources to run?

KumaTea says he can compile it on his Pi.

But for your packages, don’t you do cross-platform compilation?

That’s what I wanted to see.

But you may be right that there aren’t that many use cases.

No. We do native builds on our CI devices.

Running inference on a pretrained model should still be fast. All Android phones run it that way (Google Photos, Gboard, Google Lens, Assistant, etc.), or rather probably the Lite version, but still, it should be possible.
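
To illustrate, CPU inference could look like this sketch; MobileNetV2 and the random input are my own illustrative choices, not something from this thread:

# Sketch of CPU inference with a small pretrained Keras model.
# MobileNetV2 and the dummy input are illustrative assumptions.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")

# A dummy 224x224 RGB "image" instead of a real photo.
img = np.random.rand(1, 224, 224, 3).astype("float32") * 255
img = tf.keras.applications.mobilenet_v2.preprocess_input(img)

preds = model.predict(img)
print(tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3))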

TensorFlow requires numpy 1.19.2; I don’t know why (or even how) the person behind the linked repo built it with another version.

There is a Discourse page for TensorFlow; maybe you’ll receive more help with your error there. Compiling TensorFlow on your own is horrible; even with a lot of RAM and many CPUs, you should only do that if there is really no other way.

The NVIDIA drivers are not needed; you can do everything on a CPU.
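
A quick way to verify that (a sketch; on a board without a GPU the device list should only show the CPU):

# Check which devices TensorFlow sees and run a tiny op pinned to the CPU.
import tensorflow as tf

print(tf.config.list_physical_devices())  # typically only CPU on aarch64

with tf.device("/CPU:0"):
    print(tf.reduce_sum(tf.ones((4, 4))))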

That’s what I thought. I’d seen different tensorflow packages on the x64 branch, and some were named -cuda, which made me think the ones without it were usable/built without NVIDIA dependencies. I was wrong… :cry:

What is strange is that when I installed it, it downgraded my venv’s numpy from v1.21 to v1.19…

I wasn’t only planning to run pretrained models but also to be able to train some. I still don’t know if that’s usable or not.

If anyone knows of benchmarks or wants to compare training runs on other architectures… thank you for making yourself known :yum:
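
For anyone willing to compare, a rough timing sketch like the following would give a comparable number; one epoch on MNIST is an arbitrary choice, and it assumes the dataset download works on your board:

# Rough training-speed comparison: time one epoch of a tiny MNIST model.
import time
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

start = time.time()
model.fit(x_train, y_train, epochs=1, batch_size=128, verbose=0)
print(f"one epoch took {time.time() - start:.1f}s")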

I haven’t tested it yet, but to be honest, given what @mithrial suggested, most people won’t use their SBC to train models.

If you’re interested in using TensorFlow, you’ll probably need TensorFlow Lite to run inference on mobile and edge devices, as described on the Google developers site about on-device ML.

I have found an AUR package, python-tflite-runtime, which sounds interesting because you can also install it into your own virtualenv with pip if needed (for instance):

pip install --extra-index-url https://google-coral.github.io/py-repo/ --no-deps --target="tflite_runtime" tflite_runtime==2.5.0.post1
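
Once installed, inference looks roughly like this sketch; "model.tflite" is a placeholder for whatever converted model you have, and it assumes the --target directory ends up on your Python path:

# Minimal tflite_runtime inference: load a model, feed random data shaped
# like its input, and print the output tensor.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")  # placeholder path
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

dummy = np.random.rand(*input_details[0]["shape"]).astype(input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]))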