What is the difference between this and libcamera? From the page:
> There are already other libraries for camera support on Linux. You can use the V4L2 APIs directly, or use libcamera, or libmegapixels.
> They all strike various middle points on the power vs user-friendliness scale. Having worked with all of them while developing camera support for the Librem 5, I never got the impression that any of them are particularly easy to use.
> Libobscura is an experiment because it tries to find an API that fulfills the needs of most people and remains hard to use wrong.
There are two main differences.
First, libobscura doesn't yet fully support even UVC webcams. Second, related to the quote: you will not run into a segfault with libobscura, no matter how hard you try.
When using libcamera, the task of memory management is on you, with the usual consequences.
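To make that concrete, here's a minimal sketch of how ownership can rule that class of bug out. This is my own illustration, not libobscura's actual API:

    // Hypothetical frame/stream types, not libobscura's real API.
    struct Frame {
        data: Vec<u8>, // stand-in for a mapped capture buffer
    }

    struct Stream {
        pool: Vec<Frame>,
    }

    impl Stream {
        // The caller receives the buffer by value...
        fn dequeue(&mut self) -> Option<Frame> {
            self.pool.pop()
        }

        // ...and returning it to the driver consumes it, so the borrow
        // checker rejects any later access to the frame's memory.
        fn requeue(&mut self, frame: Frame) {
            self.pool.push(frame);
        }
    }

    fn main() {
        let mut stream = Stream { pool: vec![Frame { data: vec![0u8; 16] }] };
        if let Some(frame) = stream.dequeue() {
            println!("got {} bytes", frame.data.len());
            stream.requeue(frame);
            // frame.data.len(); // compile error: use of moved value `frame`
        }
    }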
There are other, smaller differences. Image processing in libobscura is on the GPU from day one. Contributing to the project happens through Codeberg, not a mailing list. The internal architecture differs as well, although that's not very visible.
Future goals may end up diverging, too. I'm thinking of a completely different approach to configuring devices and a different governance structure.
> When using libcamera, the task of memory management is on you, with the usual consequences.
Huh? libcamera's API uses C++ shared/unique pointers pretty thoroughly; at no point should you be managing memory manually.
This looks cool. I'm looking forward to seeing the progress on the GPU acceleration.
I'm a maintainer of openpnp-capture, and I've always had hardware acceleration in the back of my mind, but I never really knew how I would go about it. Reading the comments on using shaders for color space conversion really opened my eyes (a sketch of that idea follows after this comment).
I like that this is in Rust. That should make it very portable over time, and Rust's dependency management and builds should make it a heck of a lot easier to distribute on multiple architectures (if you intend to).
Looks awesome, good luck!
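For anyone curious what the shader route looks like: here's an illustrative fragment shader for YUV -> RGB conversion (BT.601, full range), embedded in a Rust constant. It's a sketch of the general technique, not code from either project, and it assumes the Y plane and the interleaved UV plane arrive as two separate textures:

    // Illustrative GLSL, not taken from libobscura or openpnp-capture.
    const YUV_TO_RGB_FRAG: &str = r#"
    #version 300 es
    precision mediump float;

    uniform sampler2D u_luma;   // Y plane
    uniform sampler2D u_chroma; // interleaved UV plane
    in vec2 v_uv;
    out vec4 frag_color;

    void main() {
        float y = texture(u_luma, v_uv).r;
        vec2 uv = texture(u_chroma, v_uv).rg - 0.5;
        // BT.601 full-range YUV -> RGB
        vec3 rgb = vec3(
            y + 1.402 * uv.y,
            y - 0.344136 * uv.x - 0.714136 * uv.y,
            y + 1.772 * uv.x
        );
        frag_color = vec4(rgb, 1.0);
    }
    "#;

    fn main() {
        // In a real pipeline this would be compiled and linked with your GL
        // loader of choice; printing it keeps the sketch self-contained.
        println!("{}", YUV_TO_RGB_FRAG);
    }

The same structure extends to debayering: sample the raw mosaic around each pixel and reconstruct RGB in the shader instead of on the CPU.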
Passing data from the camera to OpenGL broke my mind. The code is still far from what I'd call nice, but feel free to steal anything you want!
Once I verify that debayering still works (I originally tested it years ago), GPU progress will mean calculating various statistics: color balance, brightness, contrast for focusing, histograms, etc. Those then feed into control algorithms that adjust the camera controls (rough sketch below).
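If it helps anyone picture the statistics step, here's a rough CPU-side sketch, again my own, with a hypothetical proportional exposure controller; the real thing would compute the histogram on the GPU:

    // Build a luma histogram and derive mean brightness for auto-exposure.
    fn luma_histogram(luma: &[u8]) -> [u32; 256] {
        let mut hist = [0u32; 256];
        for &y in luma {
            hist[y as usize] += 1;
        }
        hist
    }

    fn mean_brightness(hist: &[u32; 256]) -> f32 {
        let total: u64 = hist.iter().map(|&c| u64::from(c)).sum();
        let weighted: u64 = hist
            .iter()
            .enumerate()
            .map(|(v, &c)| v as u64 * u64::from(c))
            .sum();
        weighted as f32 / total.max(1) as f32
    }

    fn main() {
        let frame = vec![90u8; 640 * 480]; // stand-in for a downscaled Y plane
        let hist = luma_histogram(&frame);
        let mean = mean_brightness(&hist);

        // Hypothetical proportional controller nudging exposure toward mid-gray.
        let error = (128.0 - mean) / 128.0;
        let next_exposure = 1.0 * (1.0 + 0.5 * error);
        println!("mean luma {:.1}, exposure scale {:.3}", mean, next_exposure);
    }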