Different applications have different focuses.
Setting aside general features, which are easy to compare, what makes nanoDLP unique compared to the other available solutions is also its biggest weakness.
Almost all of the other solutions run on desktop environments; some of them are cross-platform, as they use Java or Qt/C++.
nanoDLP, in contrast, does not run on a desktop environment, and some of its most critical parts talk directly with the firmware layer.
As a result:
- Portability: nanoDLP is not portable; it runs only on Raspberry Pi Linux.
+ Speed: It is the fastest at changing layers, with under 10 ms of delay without caching. Other solutions without caching show around 500 ms (approx.) of delay on the rpi and 100-300 ms on a fast desktop. If you want to do something very quickly, it is the only freely available solution currently optimized for such speeds.
+ Synchronization: Due to its high-performance nature, it can sync movements and image displays without noticeable delay, so instead of inserting predefined delays it can talk with 3rd-party boards (see the sketch after this list). Some SLA manufacturers have patched their Arduino board firmwares to take advantage of this ability.
+ Reliability: It uses a headless version of Linux, the kind used for servers, in contrast to desktop Linux, Windows, etc. I am sure you can keep any stable version of nanoDLP on an rpi online for a couple of years without needing to restart or reboot, which is very unlikely for any desktop application.
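
To make the synchronization point concrete, here is a minimal Go sketch of the ack-driven approach (illustrative only, not nanoDLP's actual code: the port path, the G-code, the "ok" reply, and showLayer are all made up, and the serial port is assumed to be configured beforehand, e.g. with stty):

// sync_sketch.go - wait for the board's ack, then flip the layer image,
// instead of sleeping for a predefined worst-case delay.
package main

import (
    "bufio"
    "fmt"
    "log"
    "os"
    "strings"
)

// showLayer is a hypothetical stand-in for whatever pushes the layer
// image to the display (framebuffer, firmware blob, etc.).
func showLayer(n int) {
    fmt.Printf("displaying layer %d\n", n)
}

func main() {
    port, err := os.OpenFile("/dev/ttyAMA0", os.O_RDWR, 0)
    if err != nil {
        log.Fatal(err)
    }
    defer port.Close()
    reader := bufio.NewReader(port)

    for layer := 1; layer <= 3; layer++ {
        // Ask the board to move the plate for the next layer.
        fmt.Fprintf(port, "G1 Z%.2f F100\n", float64(layer)*0.05)

        // Block until the firmware acknowledges the move completed.
        for {
            line, err := reader.ReadString('\n')
            if err != nil {
                log.Fatal(err)
            }
            if strings.HasPrefix(strings.TrimSpace(line), "ok") {
                break
            }
        }

        // The display can be switched immediately after the ack.
        showLayer(layer)
    }
}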
Offline
The last point, reliability, is actually wrong. First, it would more accurately be termed "availability", not "reliability". Second, since it is based on Raspbian, and Raspbian does get updates, users should periodically ssh into the box and run:
sudo apt-get update
sudo apt-get dist-upgrade
sudo shutdown -r now
to fetch and apply updates, then reboot. Failing to do so leaves security vulnerabilities unpatched. If your machine never connects to any computer network, then you don't need to do this (and, indeed, can't, since it requires a network connection), but otherwise you still have a nonzero attack surface.
Offline
mattcaron,
You are right, the last point reads like a description of availability, but in this case technical choices such as using a highly reliable headless Linux distro affect the reliability of the whole system.
Availability alone is not useful here; if the system segfaults and displays wrong data, that would not be acceptable at all.
The point is that even if the software is very professionally written, running it on something like X11 or Windows means you should expect a degree of failure that affects both the availability and the reliability of the system.
You are right again, especially if the device is using a public IP, but nowadays almost all upgrades can be done without a reboot. The earliest you would need to reboot the system for a dist-upgrade will be 2020 (Raspbian Jessie); before that, a plain upgrade will be enough.
Offline
Unless Raspbian is using ksplice, kernel upgrades will need reboots.
Further, if glibc is replaced (as is going to be common given the recent glibc vulnerability), a reboot is best to ensure system stability; otherwise some apps will be using the old version and some the new.
Finally, I design highly reliable/available systems which run X11 and do not crash. The only time they get rebooted is when Ubuntu issues updates that require it (i.e. the /var/run/reboot-required file exists), which is about once a month. Based on my calculations, that is 99.988% uptime (30 days * 24 hours a day * 60 minutes per hour = 43200 minutes; reboots take 5 minutes; 43195/43200 = 0.999884259). This is perfectly acceptable uptime for servers, and well more than enough for a printer, even a commercial one.
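
For the curious, a small sketch that reproduces that arithmetic plus the reboot-flag check (Go purely for illustration; the numbers simply mirror the calculation above):

// uptime_sketch.go - back-of-the-envelope uptime for one 5-minute
// monthly reboot, plus the standard Debian/Ubuntu reboot-flag check.
package main

import (
    "fmt"
    "os"
)

func main() {
    const minutesPerMonth = 30 * 24 * 60 // 43200 minutes
    const rebootMinutes = 5              // one monthly reboot
    uptime := float64(minutesPerMonth-rebootMinutes) / float64(minutesPerMonth)
    fmt.Printf("uptime: %.6f (%.3f%%)\n", uptime, uptime*100) // 99.988%

    // The same flag Debian/Ubuntu set when an update needs a reboot.
    if _, err := os.Stat("/var/run/reboot-required"); err == nil {
        fmt.Println("reboot required")
    }
}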
Please note - I'm not questioning your design choices; I think you made the correct ones with regards to nanoDLP. I'm questioning the rationale behind them.
Offline
As they say, a chain is only as strong as its weakest link.
You have achieved great uptime, and I agree it is more than enough for printers.
We have tried different implementations for the display module: X11, OpenGL without X, OpenVG without X, the plain framebuffer, and at last our current implementation, which talks directly with the firmware blob.
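
For reference, the plainest approach in that list, pushing a layer image straight to the Linux framebuffer, looks roughly like this minimal Go sketch (illustrative only; it assumes a 32-bpp 1920x1080 /dev/fb0, whereas real code would query the geometry with an ioctl):

// fb_sketch.go - write one solid white "layer" to the framebuffer.
package main

import (
    "log"
    "os"
)

func main() {
    fb, err := os.OpenFile("/dev/fb0", os.O_WRONLY, 0)
    if err != nil {
        log.Fatal(err)
    }
    defer fb.Close()

    const width, height, bytesPerPixel = 1920, 1080, 4 // assumed geometry
    frame := make([]byte, width*height*bytesPerPixel)
    for i := 0; i < len(frame); i += bytesPerPixel {
        frame[i+0] = 0xFF // blue
        frame[i+1] = 0xFF // green
        frame[i+2] = 0xFF // red -> all channels on = white pixel
    }
    if _, err := fb.Write(frame); err != nil {
        log.Fatal(err)
    }
}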
Our main reason was to improve performance, as the original rpi was much slower than the current one.
Unfortunately, something that works reliably on x86 can fail miserably on ARM. Our early builds (up until #400-500) were based on Python. We had lots of crashes and segfaults, some of them very hard to track down and mostly random.
Even now we do not know why uwsgi goes down so randomly on ARM. We do not use a 3rd-party board to control the hardware; everything is done on the rpi, which puts a lot of stress on it.
Eventually we moved away from Python. Lots of things changed, but we still have a lot of code that handles probable crashes and their aftermath, keeping the position and other details in case of emergency so the whole print is not lost. It can recover prints.
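
The recovery bookkeeping amounts to something like the following Go sketch (illustrative only, not our actual code; the file name and fields are made up):

// recovery_sketch.go - persist just enough state to resume a print.
package main

import (
    "encoding/json"
    "log"
    "os"
)

// PrintState holds the minimum needed to resume an interrupted print.
type PrintState struct {
    Layer     int     `json:"layer"`
    ZPosition float64 `json:"z_position"`
}

const stateFile = "print-state.json" // hypothetical path

// save writes the state atomically so a crash mid-write cannot corrupt it.
func save(s PrintState) error {
    data, err := json.Marshal(s)
    if err != nil {
        return err
    }
    tmp := stateFile + ".tmp"
    if err := os.WriteFile(tmp, data, 0644); err != nil {
        return err
    }
    return os.Rename(tmp, stateFile) // atomic on the same filesystem
}

// load restores the last saved state after a crash or power loss.
func load() (PrintState, error) {
    var s PrintState
    data, err := os.ReadFile(stateFile)
    if err != nil {
        return s, err
    }
    err = json.Unmarshal(data, &s)
    return s, err
}

func main() {
    if err := save(PrintState{Layer: 42, ZPosition: 2.10}); err != nil {
        log.Fatal(err)
    }
    s, err := load()
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("resuming at layer %d, Z=%.2f", s.Layer, s.ZPosition)
}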
Currently we ship a single statically linked binary with a bare minimum of dependencies on external programs. Our crash rate went from around 2% to zero; as far as I know, we have not experienced any problem during at least the last 1000 production prints.
As you have probably seen, the same nanoDLP has been running on our server as a demo app for the last two months: http://www.nanodlp.com:8080/
In the end, I think it's worth it.
Offline
I do embedded ARM systems as well, but you're right - the uptime figures I posted were for x86. My embedded systems generally have an even better uptime. But, they have to, since they run nuclear reactors and power plants. I don't have hard uptime figures for ARM, because I reboot mine too often. But, customers in the field only reboot them when the power goes out or they need to reload firmware.
Offline