Cool! This is getting there!
Here's some feedback:
* It threw me off a little that when you press enter in a cell it generates the mask. I finally realized I have to type a value and wait (without pressing enter) so the DLP grid cell changes without generating the mask. The up/down arrows are nice (just really small on my 4K screen)
* I like the blue outline, except it is really hard to see at full brightness (value=256)
* Can you persist/store the cell values? This would allow the user to go in/out of mask generator to do things like turn on/off projector w/o losing settings, or come back later to adjust settings a little.
* When the mask generates, it is "blocky" (i.e., not a usable mask). Do you have plans to use an image resize to create the mask?
* Nothing seems to happen when I click [save mask]. I expected it to change the mask on the Setup...Projector Mask screen.
I hope that is helpful!
Regards
Larry
Not yet. I can try that later this afternoon (it's 5:30am).
My sensor read values from 0.6 to 3.81 across the surface of a DLP grid. I found that the sensor readings were non-linear, and I had to experiment and use a regression generator to derive a formula that produces the right greyscale values. Here is the formula:
greyscale = 20 * (-0.1282x^5 + 1.6196x^4 - 7.843x^3 + 17.8275x^2 - 16.3317x + 4.75)
Even that formula was not perfect.
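For concreteness, evaluated in Python the formula looks like this (the coefficients came out of my regression for my sensor, so treat them as an example only):

```python
# 5th-order regression mapping my sensor reading x to a greyscale value.
# These coefficients are specific to my sensor/setup -- example only.
def greyscale_from_reading(x: float) -> float:
    return 20 * (-0.1282 * x**5 + 1.6196 * x**4 - 7.843 * x**3
                 + 17.8275 * x**2 - 16.3317 * x + 4.75)

# Readings I saw ranged from 0.6 to 3.81:
for reading in (0.6, 2.0, 3.81):
    print(reading, round(greyscale_from_reading(reading), 1))
```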
The problem is neither you nor the user will know the linearity formula for their sensor.
Using my method, you do not have to know the formula because the user will enter values (1-256) in a cell to get the right greyscale value so their sensor shows the right value.
If you use your method, you will need a way for the user to enter a linearity formula. Then they will have to experiment over and over entering different formulas until they get the right formula.
I suggested my method to avoid this challenge of having to determine a linearity formula.
I agree that showing the correct brightness would be helpful, but we do not know the range of values users are going to enter, so until all squares are filled we could not display the brightness correctly. Also, I believe it would be confusing for users. Instead, I have made the preview button display very fast to help verify the correctness of values.
I'm not understanding why the corresponding brightness (greyscale) cannot be displayed correctly on the DLP grid square when a user types a value between 1 and 256?
With the current method, I think the problem is going to be the iteration time. If the user has to push the preview for every value they enter so they can take a measurement with their sensor, it will take a long, long time to determine each cell value. For one cell, the user will have to enter a value, generate the mask, measure a value, over and over until the sensor reads the right value for that cell position. In addition, the mask won't be right because the values in the adjacent cells may not be right yet.
If the user enters a value and that changes the greyscale immediately on the DLP square the iteration time is much faster.
But, I also agree that the interpolation is going to make the brightness be different than having the square all one greyscale value. Although, the result may be close enough without worrying about the interpolation.
So, I have an idea that might make mask generation fast, which may also help with the iteration time. If you have access to an image resize function, you can make a small image using the measured values (10x5 pixels), then use resize to scale it up to 1920x1080. I have found when experimenting that resize is very fast and performs interpolation automatically.
Example of 10x5 image
Example of 192x108 image (your site doesn't allow 1920x1080 images)
I used this site to re-size the image
http://resizeimage.net/
If the mask generation algorithm is fast enough, then you could apply the mask to the grid displayed on the DLP whenever the user entered a value in a cell (AND each pixel on the mask with the grid image).
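Here's a rough sketch of the resize idea in Python using Pillow (the measured values below are made-up placeholders, and Pillow is just one library that could do this):

```python
# Build a tiny greyscale image from the measured cell values, then let the
# library's resize interpolate it up to projector resolution.
# NOTE: the 10x5 values below are invented placeholders.
from PIL import Image

cols, rows = 10, 5
values = [min(255, 120 + 2 * (c + r) * (c + r))   # fake measurements
          for r in range(rows) for c in range(cols)]

small = Image.new("L", (cols, rows))   # "L" = 8-bit greyscale
small.putdata(values)

# Bicubic resampling interpolates smoothly between the measured points.
mask = small.resize((1920, 1080), Image.BICUBIC)
mask.save("mask.png")
```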
Here is a screenshot of the up/down arrows
So I gave it a shot. Here's my feedback...
* I clicked on the [Turn Projector On], then [Display] projector calibration grid, then [Mask Generator] button
* It stayed on the projector calibration grid
* then clicked in a visual grid cell, then the grid came up with that cell highlighted, then a couple of seconds later the mask came up
* then clicked in a visual grid cell, then the grid came up with that cell highlighted, but the grid stayed up this time
* then typed 256 into the grid cell (I used the middle cell) and hit [enter]
* I expected the square on the DLP to go black, but it did not seem to change intensity at all
* Then I clicked [Preview], nothing happened, I expected the mask to display on the browser or the DLP.
* I waited for about a minute and then the mask came up with a square cell darkened in the middle of the projection (I was expecting a dark point circularly fading about 1/2" in diameter)
* The DLP flickered several times while I was typing this out
Suggestions:
* Darken/lighten the square of light in real time as the user types a value in the corresponding cell (the user will have their light sensor on that spot to see if it is the right value before adjusting the next cell value)
* Highlight the selected cell by some other indicator than brightness (for example, put a red dot by it, or a red rectangle around it). In this way, each square on the DLP reflects the brightness value the user indicates in each cell. With this capability, the user can verify adjacent cells, or compare adjacent cells.
* The up/down arrows are really nice, but they are so small on my screen, I can't click on them (I have a 4K screen)
* When generating the mask, assume the center pixel of the square is the value in the cell, then interpolate from that point to the values defined by the centers of the adjacent cells. The Excel sheet I provided has an interpolation algorithm that you may be able to use.
* Have some sort of progress bar to indicate the mask is being generated
* Kill the mask generation if they move off the mask generation form (so it doesn't pop up suddenly when they are doing something else)
* Maybe allow mask generation to be cancelled?
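To illustrate the interpolation suggestion, here is one way it could work (a pure-Python sketch, not the Excel algorithm; the 2x2 grid and the 4x4 output are just for demonstration). Each mask pixel is bilinearly interpolated from the four nearest cell centers, clamped at the borders:

```python
# Bilinear interpolation: each entered value is treated as the brightness at
# the *center* of its grid cell, and every mask pixel blends the four
# surrounding cell centers.
def bilinear_mask(grid, width, height):
    rows, cols = len(grid), len(grid[0])
    mask = []
    for y in range(height):
        # pixel position in "cell center" coordinates, clamped at borders
        gy = min(max(y / height * rows - 0.5, 0), rows - 1)
        r0 = int(gy)
        r1 = min(r0 + 1, rows - 1)
        fy = gy - r0
        row = []
        for x in range(width):
            gx = min(max(x / width * cols - 0.5, 0), cols - 1)
            c0 = int(gx)
            c1 = min(c0 + 1, cols - 1)
            fx = gx - c0
            top = grid[r0][c0] * (1 - fx) + grid[r0][c1] * fx
            bot = grid[r1][c0] * (1 - fx) + grid[r1][c1] * fx
            row.append(top * (1 - fy) + bot * fy)
        mask.append(row)
    return mask

# Tiny demo: a 2x2 grid of values scaled up to a 4x4 "mask".
demo = bilinear_mask([[0, 100], [100, 200]], 4, 4)
```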
I hope these suggestions are helpful!
I'm really excited to see this feature being implemented!
Super! I'll try to carve out some time tonight to give it a try!
I understand now. Thank you.
Sounds like you have a good plan.
I'm really anxious for this feature and I think it will put nanodlp at the cutting edge of DLP mask creation!
I'm totally supportive of #5 visual grid
On #7, (1) having a higher capability to define greyscale is OK (24-bit versus 8-bit), (2) I don't know, (3) I don't know
On size of measurement point, the original article uses 1/2" diameter dots. The sensor itself is about 1/8"x1/8" square. I would recommend 1/4" or 3/8" square dot.
Note on #5: The user could press a [next] button or enter a grid position number
Note on #6: displays grid position #1 (if [next] is pressed) or the grid position number entered
Note on #7: may need to be 1 ~ 1024 (I'm not sure)
Note on #1: I would suggest allowing them to enter 2 values: (1) # of points for height (minimum 3, default 5), and (2) # of points for width (minimum 5, default 9). This way they don't enter something that doesn't make sense.
I just saw the plan. I've been out-of-town. The challenge I have had is determining a mask from the measured points. I have found that the formula to go from measured points to a working mask is non-linear. I ended up using a 5th order regression to try to get a mask that makes all the points the same measured value.
To avoid all of that, I would suggest the following:
1. User decides how many points she/he wants to measure
2. Nanodlp displays the grid as an image (with no mask applied) on the projector, and displays an edit field on the UI for entering the dimmest value
3. User measures points to find the dimmest value
4. User enters dimmest point value into interface
5. User selects next input (or chooses a particular grid position - there may only be one wrong position that needs to be corrected)
6. Nanodlp displays only grid position #1 & user places sensor on grid position #1
7. User enters a value between 1 and 256 (1=full brightness, 256=black) to dim the grid position until the sensor reading matches the dimmest value
8. User continues steps 5 ~ 7 for all points
9. User can press [Display Mask] at any time and the mask will be displayed (allow choosing with or without the grid)
10. User checks whether the brightness is the same across the whole surface, or continues to tweak values
11. User clicks [Export] and the mask is saved
The difference here is that when you calculate the mask, you already know how much to dim at each grid position. You will not need to figure out some special formula to create the mask.
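In other words, once every cell has a value, the mask level at each grid position is a direct mapping from that value. A minimal sketch (the linear 1→255, 256→0 mapping here is my assumption about how the values would translate, not anything NanoDLP actually does):

```python
# Direct mapping from an entered cell value (1 = full brightness, 256 = black)
# to an 8-bit greyscale pixel. The linear mapping is an assumption for
# illustration; the point is that no fitted formula is needed.
def cell_to_pixel(value: int) -> int:
    if not 1 <= value <= 256:
        raise ValueError("cell value must be between 1 and 256")
    return 256 - value

entered = [[1, 40, 1],      # invented example: a 2x3 grid of user values
           [30, 256, 30]]
pixels = [[cell_to_pixel(v) for v in row] for row in entered]
```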
I hope you don't mind me throwing out another concept idea...
For the UV sensor, rather than taking measurements and entering all that data in, here is another possible method:
For each grid section, allow the user to adjust the grayscale value being displayed (maybe one slider for big changes and a small slider for small changes)
The user could simply change the sliders on each grid section until the UV Sensor displays the value they want. The user could indicate [done] and have that show on the UI. Perhaps the user could select the section they are working on, or the UI enforces going in a particular order.
Then you could use the grayscale values to create the mask. Seems like it would be pretty accurate too. Thoughts?
As a side note, it would be really helpful if there were a way to regenerate the plates using the latest uploaded mask (so you don't have to re-create plates every time you change the mask). Even better if you could have multiple masks and select the mask on the plate (for example, you could use a brighter mask if you are only using the center portion of the plate for the print for faster prints). Just an idea. Thanks!
Thanks for such a quick reply! Apparently that's exactly what happened (firefox). I refreshed the browser and voila it shows the right mask.
I'm trying to replace the mask with a new mask file. If I upload a new file, it defaults to the original mask file (even if I remove the original mask file first). I've tried several png files with no success. Any suggestions?
I suppose one could measure the stumps on the build plate with a caliper and enter those values in.
Here's another concept for auto mask generation that may be very simple to implement and easy for the user... Internal to NanoDLP, generate slices of a model that is simply a series of 2mm posts in a grid that covers the entire build area. After the burn-in layers, print the first layer starting at a high cure time and then sequentially lower the cure time for each layer printed (you would have to keep peel speed and height constant). My assumption is that as the cure time decreases, eventually the posts that are getting the least light will break away from the model and stick to the vat bottom. The camera could detect this ("fallen part detection") and record the location and cure time. Eventually every post would fail. With this information you could use interpolation to create a mask. The user does nothing except clean the vat of the multiple "fallen parts" at the end. I'm not positive this will work, but I thought I would throw it out there. Thoughts?
The grid sample is the uvcal.stl document that can be downloaded here: http://www.buildyourownsla.com/forum/vi … 684#p11719
Three data samples (DS1, DS2, DS3) are this OpenOffice document: bicubic-interpolation-larry-v1.ods
That file also gives an example of bi-cubic interpolation. It was really slow, and I honestly don't remember if I left it in a working state.
Just another quick thought... if you project red squares separated by a grid (same size grid as for the UV sensor), then the algorithm could determine the average intensity within each square, and you could feed that into the mask generation algorithm used for the UV sensor.
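A quick sketch of that averaging step (the camera frame here is a fake random array and the grid geometry is invented; a real frame would come from the RPi camera, and numpy is assumed to be available):

```python
# Average the intensity inside each projected square of a camera frame.
import numpy as np

frame = np.random.default_rng(0).integers(0, 256, size=(540, 960))  # fake frame
rows, cols = 5, 10                      # same grid layout as the UV sensor

cell_h = frame.shape[0] // rows
cell_w = frame.shape[1] // cols
averages = np.empty((rows, cols))
for r in range(rows):
    for c in range(cols):
        square = frame[r * cell_h:(r + 1) * cell_h,
                       c * cell_w:(c + 1) * cell_w]
        averages[r, c] = square.mean()
# `averages` could then feed the same mask-generation step as the sensor data.
```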
I will add both features to my task list.
Very Cool! These will be great features!
If, when entering the data, you can tab sequentially from field to field, I could potentially modify the Uno to be a HID keyboard. When you measure a value by clicking an Uno button, it would "type" the measured value into the field and then "press" [tab]. So it could automatically fill out the fields in sequence. http://mitchtech.net/arduino-usb-hid-keyboard/
I like the possibility of the following:
* Low resin detection
* Uneven platform detection <= Maybe make this a homing detection too? On muve3d printer this is a common issue and you could potentially automatically home both sides.
* Fallen part detection <= This is pretty common and wastes a ton of time waiting for a print that is already ruined.
I hope it will be possible to measure the intensity of UV-range light by measuring another part of the light spectrum too (I guess the bulb's light output should be homogeneous, at least across its spectrum range). We do not need an absolute intensity value; a relative value will be enough.
Interestingly, projecting red (600nm) may be a better relative approximation of the UV band (400nm) than white light (multiple components across the spectrum). See the spectrum output for the muve3d projector: http://www.muve3d.net/press/projectors/
On banding...
I wonder if the vat bottom is acting as a prism and splitting the white light into its component values as it is reflected from both the bottom and top surfaces of the glass? See http://science-edu.larc.nasa.gov/EDDOCS … olors.html. If this is the case, then projecting red may prevent the banding.
A means to generate the grid, enter data, and generate a mask would definitely be useful when using a UV sensor (even if those values are entered by hand). This is measuring the direct light intensity from the projector at the vat floor where the resin will cure.
I really like the concept of the alternative as well. Here are some questions:
* What is "banding"?
* Would the RPi camera take the photo straight down onto the vat floor? Or, from an angle? Or, from the backside of the vat?
* Would you put a piece of paper down on the vat floor to block most of the intense light that might overwhelm the camera sensor? (I've seen forums describe this technique)
Here's what I really like about the alternate idea:
* Use a simple RPi camera to make a mask (potentially with a filter as described below)
* The real-time focus feedback
* Diminishing bulb intensity feedback
* You could potentially detect the projector length and width to dial in the right values for the slicer
Here are some thoughts:
* Most resins cure at around 400nm frequency light. If a band pass filter for UV light were used on the camera lens (or UV sensor), that may help make the mask more accurate.
* See this link for a cheap UV filter, but make sure to read all the comments because there are some disadvantages: http://www.instructables.com/id/Photogr … /?ALLSTEPS
* See this link for a 400nm band pass UV filter for $30 (check for others as well): http://www.edmundoptics.com/optics/opti … ers/28404/
* An expensive CCD camera in the right UV range: http://www.edmundoptics.com/cameras/nir … ras/56346/
Interesting. Thank you for that description. That's smart.
Allow the user to create a mask with the following method:
1. Determine and project a grid pattern that covers the projection area
2. Project, one at a time and in sequence, a 1 cm dot at each grid point intersection
3. The user places a UV detector over the dot and clicks a button to send a serial stream to nanodlp containing the magnitude of the UV light at that point (see http://www.buildyourownsla.com/forum/vi … f=5&t=3684). You define the format, pins, baud rate, etc to use and I'm sure we can modify the arduino uno code to provide it to nanodlp on the raspberry pi. Note that the grid pattern can help the user align the sensor for more accurate results.
4. Repeat steps 2 & 3 for 3 cycles
5. Average the data from the 3 cycles after throwing out outliers
6. Calculate a mask based on the averaged points. One idea is to use bicubic interpolation (see https://mathformeremortals.wordpress.co … ys-ranges/). I have used this to generate a 192x108 mask. It works, but it is really slow. So perhaps there is a better method.
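Steps 4 and 5 (average the three cycles after throwing out outliers) could look like this. The median-distance test and the 50% threshold are arbitrary choices for illustration:

```python
# Average the three cycle readings at each grid point, discarding any
# reading more than 50% away from the median (threshold is arbitrary).
import statistics

def robust_average(readings, tolerance=0.5):
    med = statistics.median(readings)
    kept = [r for r in readings if abs(r - med) <= tolerance * med]
    return sum(kept) / len(kept)

points = [[2.1, 2.0, 5.9],    # 5.9 gets thrown out as an outlier
          [1.4, 1.5, 1.45]]   # all three kept
averaged = [robust_average(p) for p in points]
```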
Another thought I had was you could allow mask validation by:
1. Performing steps 1 to 4
2. Display the results to the user (perhaps showing std deviation of the rows and columns). If the mask worked, you should see a fairly even distribution.
And another thought:
1. Use the mask validation to determine suggested cure times (this may take some experimentation, but perhaps allow a formula so the cure time can be calculated from the mask values)
Is there any information somewhere that describes: What exactly is Pixel Dimming? How/why does it work?
Curious how difficult it would be to implement nanodlp commands as described in the discussion on this link?
https://groups.google.com/forum/#!topic … ahWrpfScvI
This would give the possibility of experimentation with force feedback and could potentially morph into a real-time capability.
Thoughts?
Great! Thanks! I'd be glad to help too when the wiki gets posted.
There are so many fantastic features to nanodlp. Most of the features are fairly intuitive. Others I would love to learn more about. Is there a manual that describes all the features in nanodlp?
Thank you!