Frequently Asked Questions

This is a selection of our most frequently asked questions (FAQs). Please get in touch with hello@augmented.city if you need additional information.

Will the app work without a GPS signal?

It should. However, the system uses GPS to roughly filter the candidate locations and speed up the response. Without GPS, localization starts from a much larger search area, which can delay the answer.

Recognition quality also depends on GPS availability: roughly 96% with GPS versus 86% without it (when a CNN is used; about 65% without a CNN).
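A minimal sketch of how a GPS fix can narrow the search, assuming a hypothetical list of mapped regions with known centers; the names and data layout are illustrative only and do not reflect the actual server logic.

```python
import math

# Hypothetical mapped regions; in the real system these would be parts of the 3D cloud.
MAP_REGIONS = [
    {"id": "bari_old_town", "lat": 41.1258, "lon": 16.8674},
    {"id": "spb_nevsky",    "lat": 59.9343, "lon": 30.3351},
]

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in meters (haversine formula)."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def candidate_regions(gps_fix, radius_m=500):
    """With a GPS fix, only regions within `radius_m` are searched;
    without a fix, every region has to be considered, which is slower."""
    if gps_fix is None:
        return MAP_REGIONS
    lat, lon = gps_fix
    return [r for r in MAP_REGIONS
            if distance_m(lat, lon, r["lat"], r["lon"]) <= radius_m]
```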

The app does not work in the evening / at night

The system is designed to work during the day, in the evening, and at night. However, if a scene has only been scanned in daytime, localization may not work at night. We therefore suggest scanning the scene at night as well.

Requirements for the scene: how many objects, how close, glass-covered buildings, etc.

  1. The scene should contain at least one object of interest to which the client will attach a sticker with additional information. For mapping, the scene must include distinctive building facades, statues, or other man-made objects.

  2. The minimum distance to the object is 2 m for outdoor scenes; scanning at closer distances is inefficient because it requires smaller lateral steps.

  3. When scanning a building with glass, reflections should not be scanned, or their influence should be minimized. The scanner should capture frames in which the window frames are clearly visible and should avoid capturing only reflections in the glass.

Precision

We claim about 10-15 cm, but on one of our scenes we measured a mean localization error of 35 cm. We still need to determine for which types of scenes 10-15 cm is achievable, and to keep that in mind.

Time/complexity of map creation

  1. Our AC Scanner application is designed to make mapping as easy as possible. Step 1 of its wizard helps the user turn around a point more effectively, and the next steps help with passes and with choosing how to scan a scene.

  2. Currently each point takes about 25 seconds (more if scanning in vertical orientation), plus 5 seconds to move to the next point. A 100-meter street segment takes 20-30 points per pass, i.e. about 10-15 minutes. If the street needs only 2 passes, mapping 100 m takes about half an hour; wider streets requiring 4 passes take about 1 hour (see the worked estimate below).
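A worked estimate using the figures above (25 s per point, 5 s to move, 20-30 points per pass); the helper function is only an illustration of the arithmetic.

```python
def mapping_time_minutes(points_per_pass, passes, seconds_per_point=25, seconds_to_move=5):
    """Rough mapping-time estimate for one street segment."""
    seconds = passes * points_per_pass * (seconds_per_point + seconds_to_move)
    return seconds / 60

# 100 m of street with 25 points per pass:
print(mapping_time_minutes(25, passes=2))  # 25.0 min, i.e. roughly half an hour
print(mapping_time_minutes(25, passes=4))  # 50.0 min, i.e. roughly one hour
```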

Is it possible to use 3rd-party images (Google panoramas, city mapping systems, other cameras on cars)?

Using Google panoramas is prohibited!

Our current technology can use frames from other cameras with low distortion; for example, we do not yet support fisheye-lens cameras for mapping.

As for frames taken from cars, we still need to estimate how well such frames work for localization from sidewalk areas.

Who owns the created 3D cloud?

According to the contract with a city, there are several possible options:

  • The customer creates the map and owns it on a private server;

  • The customer creates the map in our cloud and we own it; the customer receives royalties from third-party use of it;

  • We create the scenes and own them.

Reconstruction on phone or on server?

Currently reconstruction is performed entirely on a server. We plan to move part of it to mobile devices to speed it up.

Localization on phone or on server?

Global localization is done on our server via our SDK; local localization is done on the mobile device using the ARKit/ARCore libraries.
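A rough sketch of how a single server-provided global pose can be combined with on-device AR tracking; the matrix convention and helper function are illustrative assumptions, not the actual SDK or ARKit/ARCore API.

```python
import numpy as np

def world_pose_now(global_anchor, local_anchor, local_now):
    """Chain local AR tracking onto one server-provided global pose.

    Poses are 4x4 camera-to-frame transforms: `global_anchor` is the camera pose
    in the global map frame at the anchor moment (from the server), while the two
    `local_*` poses come from the device's AR tracking in its own local frame.
    T_world_local = global_anchor @ inv(local_anchor), so the current camera pose
    in the global frame is T_world_local @ local_now.
    """
    t_world_local = global_anchor @ np.linalg.inv(local_anchor)
    return t_world_local @ local_now
```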

Where do we host our servers?

Currently they are in St. Petersburg. We plan to distribute our servers across countries or continents: one in Italy (Bari), another in China (Shenzhen), a third in the USA, and so on.

Unity?

We plan to release our SDK for Unity developers in the summer of 2020.

Processing time for reconstruction/localization

Reconstruction: a 100-meter pass of a street segment takes about 1 hour; we plan to speed this up to 5-10 minutes.

Localization: currently it takes about 1 s; we plan to reach 0.3 s this summer. Later we will move localization onto mobile devices to achieve real-time behaviour.

How many stickers can we show on an image?

Currently the number is not limited, so the view becomes chaotic when there are too many of them. We are therefore developing a redesigned Tourist app that shows at most 5 info markers for nearby stickers and clusters the rest into groups.
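A minimal sketch of the planned limit, assuming each sticker carries a distance to the user; the grouping rule here (everything beyond the 5 nearest goes into one cluster) is a simplified illustration of the behaviour described above, not the actual app logic.

```python
def select_markers(stickers, max_visible=5):
    """Show info markers for the `max_visible` nearest stickers;
    collect everything else into a single clustered group."""
    ordered = sorted(stickers, key=lambda s: s["distance_m"])
    return ordered[:max_visible], ordered[max_visible:]

stickers = [{"name": f"sticker {i}", "distance_m": d}
            for i, d in enumerate([12, 3, 40, 7, 55, 21, 9, 30])]
visible, clustered = select_markers(stickers)
print([s["name"] for s in visible])  # the 5 nearest stickers
print(len(clustered))                # remaining stickers shown as one group
```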

Do we use neural networks?

Not currently; the system does not use CNNs at all.

But we plan to use them in 2 main directions:

  1. Filtering out moving objects that were visible during mapping, to speed up mapping and make our sparse clouds more reliable.

  2. When GPS is not available, or for indoor scenes, a CNN can be used to speed up localization (see question 1).

Possible problems you might encounter

Minimum camera requirements

At least 2-megapixel resolution, low distortion, and a minimum FOV of 50 degrees.
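A simple check against the stated thresholds (2 MP and FOV of at least 50 degrees); the function is only an illustration, and the distortion requirement is not quantified here.

```python
def camera_is_supported(width_px, height_px, horizontal_fov_deg):
    """Return True if the camera meets the minimum requirements:
    at least 2 megapixels and a field of view of at least 50 degrees.
    (Low distortion is also required but is not checked here.)"""
    megapixels = width_px * height_px / 1_000_000
    return megapixels >= 2.0 and horizontal_fov_deg >= 50.0

print(camera_is_supported(1920, 1080, 65))  # ~2.1 MP, 65 deg FOV -> True
print(camera_is_supported(1280, 720, 60))   # ~0.9 MP -> False
```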

Which mobile phones do we not support or experience problems with?

The first Tourist app will work on mobiles whose platforms support AR technology, i.e. ARKit/ARCore. Their official sites list the supported devices.

To support old devices incompatible with AR* cameras, we are going to release T2 with legacy camera support: camx (for Android) and iCam (for iOS).

However, Apple does not allow apps targeting iOS versions below 13, which means releasing T2 for old devices may be a problem.

For Android we are going to release T2 based on the camx camera library, which will work, for example, on the Samsung S10 and on devices from some Chinese vendors. Our Restorator app is already based on camx and therefore supports a wider range of models than previous versions of the YaPlace.Dev and Tourist apps, which were based on the now-obsolete cam2 library.

Problems scanning a room / lack of features

In such cases we need more thorough coverage with images, so mapping takes additional time. The richer a scene is in features, the less time is needed to map it.

Some scenes may have no features at all. In that case computer vision does not work, and AR* approaches will not work either.

Log-in information

Both AC Scanner and AC Objects, together with the Admin tool, ask the user for login information to bind the user to the 3D cloud they created. Currently we ask for an email address, and this information is transferred to and stored on the server.

In the future, the user should register on the site, receive a key on their mobile device, and obtain access to the server by tapping it. This will be designed and developed in summer 2020 (we need our site developer for this).

View AR / AR query in YaPlace Developer - do we want to show it?

We are going to stop delivering the YP.Dev app. Instead, we will publish 3 apps: AC Scanner, AC Objects, and AC Tourist.

An additional AC Viewer will be available for various demos with 3D stickers; it should be accessible to our SDK clients, since their business may not be connected with tourism applications.

Optimal phone orientation for reconstruction - is horizontal preferred? Always, or indoors only?

Yes, landscape orientation is always preferable, as it requires fewer frames per point during scanning. However, if a building does not fit into this format, vertical orientation should be used, even though more frames are needed at each point. The goal is to minimize sky and ground in the captured frames and capture more building facades.

Sticker is invisible - localization in another cloud (clouds not merged)

Merging should be done automatically, but at the moment it is manual and very slow. We plan to speed it up significantly and then apply it to all our clouds to merge them together. In Bari, for example, we will do this soon to make stickers from nearby reconstructions visible.
