Show HN: Kaedim – API for 3D User Generated Content
150 points by konstantina_ps 3 years ago | 40 comments

Creating digital 3D objects is getting increasingly difficult and expensive. There is a very limited supply of people who are good 3D artists, and the cost of training one is very high. It usually involves years of learning difficult 3D software. Meanwhile, more and more of the digital experiences around us are turning into 3D.
I needed this product myself. The idea for Kaedim was born from a personal frustration when, 2 years ago, I was working on a project to re-create a cathedral in 3D software for my university degree. Before getting hands-on, the concept seemed straightforward to me: "the same way you draw on a piece of paper, you can also draw in 3D, how hard can it be?". The reality shocked me. Having completely underestimated the task, I found myself needing hours to model each 3D object (chairs, tables, walls) using really complicated 3D software with a steep learning curve.
Every time I wanted to model something new I had to start from a cube and perform all the necessary operations on it to achieve the desired shape. Over and over again. Moreover, there were many times when I would bin my creations and start from scratch, hoping for better luck. The reality with 3D modelling software is that it's almost always easier to start from scratch than to try and fix a modelled object.
After my personal experience of the problem, I started thinking about game devs. "Game developers have this problem at scale; they need to build whole 3D worlds with millions of objects. How do they do it?". So we started talking to them, only to discover there is no secret: 3D asset production is a big bottleneck for them too. There is a very limited supply of people who are really good 3D artists, and the cost of training one is very high, usually involving years of training on difficult 3D software.
Our solution is an ML algorithm that creates 3D models out of 2D images. We are constantly training on more and more data points to improve accuracy, and we have added a Quality Control step to guarantee a consistent standard of quality. We then use the QC results to train our algorithms further.
Artists and game devs have used Kaedim so far to quickly prototype, create, and iterate on their 3D art in a cost-effective way. However, talking to a lot of game developers, we realised something key: for the same reason games like Minecraft and Roblox are very popular, more and more people want the opportunity to customise and contribute 3D content inside their favourite games/metaverses.
This is why we created the Kaedim API. Within your app, enable your players to upload their 2D inspiration and easily create their own 3D content for customising and populating the game.
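To make the flow concrete, here is a minimal sketch of what such an upload call might look like from a game's backend, in Python. The endpoint URL, auth header, and field names below are placeholders of mine, not the real contract; the actual request/response schema is in the documentation linked below.

    import requests

    # Placeholder endpoint and auth header -- consult the Kaedim docs for the
    # real contract before wiring this up.
    API_URL = "https://api.example-kaedim.com/v1/process"  # hypothetical
    HEADERS = {"X-API-Key": "YOUR_API_KEY"}                # hypothetical

    def request_model(image_path: str) -> dict:
        """Submit a player's 2D image; the finished 3D model arrives later
        via webhook (see the API discussion further down this thread)."""
        with open(image_path, "rb") as f:
            resp = requests.post(API_URL, headers=HEADERS, files={"image": f})
        resp.raise_for_status()
        return resp.json()  # e.g. a request ID to correlate with the webhook

    print(request_model("player_sketch.png"))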
Kaedim API Demo Video: https://www.youtube.com/watch?v=k976GJWQrKw
Documentation: https://app.archbee.io/public/m370vHO-M7WGXJQLRlIte/AU-DhH6mX0e1sb_FRXH3i#lk-useful-links
For sign-up and more information about onboarding, get in touch with us here.
Discord Server: https://discord.gg/4wN8NSUr
Thanks a lot for reading this! We are adding more and more features over time and would love to hear your feedback and ideas on what you’d like to see from the API.
If you have any cool app ideas that can be built by using Kaedim, drop them in the comments!
- lasagna_coder 3 years ago
From the video I gather that texturing is still a manual step? I'm a little confused how your editor showed the model without a texture, then you were able to do a perfect color fill on the different parts of the model. One of the most difficult parts of modeling is the texturing (bump/normal map, albedo/lighting, color, etc.), with lots of trade-offs for how big your texture is and how much can be re-used, not to mention the actual mapping stage, which even the best "smart" auto-mapping tools do just OK.
I'm impressed by the model demo, but a lot of the time comes from determining the style (conceptual design), then implementing that style within the details (which is the baking and painting aspects of creating the model textures). You mention Roblox/Minecraft, and the demo uses a kind of low-poly metaverse social app, which your demo fits well, so I'm wondering who your target market is. I assume it's games/apps with high-volume, low-detail models at the moment, is this correct?
- konstantina_ps 3 years ago
Thanks for the comment! Yes, texturing is a manual step using a small widget we've built. We do the monochrome fill for the different parts of the model. It's easy to do because of how the model is generated, formed from different parts. Then, as it enters the game environment, it's affected by lighting too.
Yes, that is correct for the majority of cases. The focus is games/apps with high volume and low detail. However, we can also do higher detail/resolution, and some of our customers are PC and console studios who use it for prototyping, blocking out scenes, and iteration.
Our website and this video feature some higher detail models: https://youtu.be/jSZ7RMq5EKA
- throwoutway 3 years ago
> Transform your 2D art into 3D Content
One suggestion: the example on your home page, to me, looks fairly three-dimensional (as far as 2D sketches go), with shading and everything. A better example might help with capturing potential customers.
Looks great though
- konstantina_ps 3 years ago
Thanks a lot for the feedback and suggestion! Taking it on board!
- andybak 3 years ago
Can we see more examples?
(Including some less successful ones and some failure cases ideally)
- rainboiboi 3 years ago
Yup, would love to see more examples of how this works!
- konstantina_ps 3 years ago
Thanks both! Here is a video demonstrating the web app version, with 3 more examples (one sketch, one concept art, one photo): https://youtu.be/jSZ7RMq5EKA
Let me know what you think!
- orliesaurus 3 years ago
Here are some inputs and outputs: https://i.imgur.com/52Vneuv.png
- konstantina_ps 3 years ago
Haha, thanks orliesaurus ;)
- fxtentacle 3 years ago
Is that AI-based, e.g. like https://www.arxiv-vanity.com/papers/1511.06702/ ?
BTW, great job improving over the public state of the art.
- konstantina_ps 3 years ago
Thanks for the comment and kind words! Indeed it is!
- alex_venetidis 3 years ago
What are the current limitations of the model which you're working on? Could you share some examples where the output is currently sub-optimal and what steps you're taking towards improving it?
- konstantina_ps 3 years ago
Hi Alex, thanks for the question! Currently, we are not producing great results when it comes to realistic humans, animals, and vegetation. Those requests are not served, as we haven't yet trained for these kinds of inputs. Once our geometry reconstruction has high accuracy for hard-surface objects, we'll move on to this category, collect data, train, and improve.
- ekianjo 3 years ago
> Once our geometry reconstruction has high accuracy for hard-surface objects we'll move on to this category, collect data, train and improve.
Is that really possible though? For complex objects there should be a multitude of 3D structures that fit a given 2D projection, and there's really no way to say which one is "the most correct one".
- konstantina_ps 3 years ago
Thanks ekianjo! Yes, that is true. For complex objects, the effect is that the algorithm loses detail. We mitigate this with our multi-view option (up to 6 images of different views). At the moment, highly complex objects are in most cases not processed; we do best with simple objects.
- konstantina_ps 3 years ago
New Discord link: https://discord.gg/ZY6wJwKDCa
- JaafarRammal 3 years ago
How much prior work does the uploaded picture need? Is a simple background enough, or does further manual processing on the 2D picture have to be done before uploading it (e.g., adding metadata, etc.)?
- andreiKaedim 3 years ago
Hello JaafarRammal, a simple background is enough as long as the main object is clearly distinguishable; no extra metadata is needed, just the image :)
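For photos where the background is busier than ideal, stripping it before upload is a cheap pre-processing step. A minimal sketch using the off-the-shelf rembg library (the library choice is my suggestion, not something Kaedim requires):

    from rembg import remove  # pip install rembg

    # Produce a PNG with the background removed so the main object is
    # clearly distinguishable before uploading it.
    with open("photo.jpg", "rb") as src, open("photo_clean.png", "wb") as dst:
        dst.write(remove(src.read()))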
- monkeydust 3 years ago
Can this convert a 2D floor plan in DWG format into a 3D model?
- konstantina_ps 3 years ago
Thanks for the question! Currently DWG is not a viable input. We once tried it with a photo of a maze-like floor plan and it just created the "walls", if you like.
- monkeydust 3 years ago
That could be interesting. Any way I could try?
- mariankh 3 years ago
Sounds awesome! Can we provide back and front images of the object? How do you achieve realistic 3D representations?
- konstantina_ps 3 years ago
Thanks for the question mariankh! For the API version, we only support a single image for the time being. However, if you are using our Web App, we also have an option for uploading up to 6 images :) For realistic 3D representations, we train on a lot of real-life objects, and then we also do a Quality Assurance pass to make sure that all our outputs meet our standard of quality.
- 4thstreet 3 years ago
Interesting! Can you share any insight into how your ML algorithm works?
- konstantina_ps 3 years ago
Hi 4thstreet, thanks for your question! Yes, we train on 2D-3D pairs. For example, we have 3D models and then we take images of them, so we learn how objects look from different points of view :)
- pradeepb30 3 years ago
But doesn't this limit you? I mean, the work to collect the data is such a pain. While I'm glad that you are doing this, is there any other method?
Also, what are your thoughts on systems like Apple's Object Capture?
- tracyhenry 3 years ago
It might not be as hard as you think. Get 1,000 3D models, 8 rotations for each model, and 1,000 scenes. Suddenly you have 8M training pairs.
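To make that arithmetic concrete (1,000 models × 8 rotations × 1,000 scenes = 8,000,000 pairs), here is a sketch of such a data-generation loop. The render() call is a hypothetical stand-in for whatever offline renderer one would use (Blender, pyrender, etc.); nothing here is Kaedim's actual pipeline.

    import itertools

    def make_training_pairs(models, scenes, n_rotations=8):
        """Yield (2D image, 3D ground truth) pairs by rendering each model
        in each scene from n_rotations evenly spaced yaw angles."""
        for model, scene in itertools.product(models, scenes):
            for k in range(n_rotations):
                yaw = 360.0 * k / n_rotations
                image = render(model, scene, yaw_degrees=yaw)  # hypothetical helper
                yield image, model

    # 1,000 models x 1,000 scenes x 8 rotations = 8,000,000 pairs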
- chrysa_mar 3 years ago
Sounds awesome!
- konstantina_ps 3 years ago
Thanks!
- peter_retief 3 years ago
Can this be used to generate 3D print files (stl)?
- konstantina_ps 3 years ago
Thanks Peter! Yes, we can add this download format as well (we currently have obj, fbx, glb, and gltf).
- peter_retief 3 years ago
Thank you!
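Until stl ships as a native download format, converting one of the existing formats locally is straightforward; a sketch with the trimesh library (my suggestion, not part of Kaedim's product):

    import trimesh  # pip install trimesh

    # Load the obj that Kaedim already exports and re-save it as stl for
    # 3D printing; force="mesh" flattens multi-part files into one mesh.
    mesh = trimesh.load("kaedim_output.obj", force="mesh")
    mesh.export("kaedim_output.stl")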
- kyriakosel 3 years ago
How does the API work? Is it webhooks?
- andreiKaedim 3 years ago
Hi kyriakosel, thanks for your question! Yes, we use webhooks to send the generated 3D model to our clients. Our documentation includes more detail: https://app.archbee.io/public/m370vHO-M7WGXJQLRlIte/AU-DhH6m...
- Raoufyousfi 3 years ago
Is it a WebSocket API?
- andreiKaedim 3 years ago
Hello Raoufyousfi, thank you for your question! We currently only use REST + webhooks.
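A minimal sketch of the client side of that flow: a small HTTP endpoint that the webhook can POST to when a model is ready. The payload field name below is a guess of mine; the real schema is in the documentation linked above.

    from flask import Flask, request  # pip install flask
    import requests

    app = Flask(__name__)

    @app.route("/kaedim-webhook", methods=["POST"])
    def on_model_ready():
        payload = request.get_json()
        # "modelUrl" is an assumed field name -- check the docs.
        data = requests.get(payload["modelUrl"]).content
        with open("generated_model.obj", "wb") as f:
            f.write(data)
        return "", 204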
- gieun 3 years ago
Sounds awesome!
- konstantina_ps 3 years ago
Thanks gieun!
- gieun2 3 years ago
Great idea!
- konstantina_ps 3 years ago
:)