Theory of the Mesh-aware auto-placement calculator
The auto-placement calculator uses a mesh-aware greedy algorithm. Greedy selection is one of the simplest and most effective methods for coverage problems, but a plain greedy algorithm has a few weaknesses. It builds a solution by picking one router at a time, each time evaluating which router contributes the most to the solution so far. Because a pure greedy algorithm always makes the same choices, it always returns the same answer and can get stuck at a local maximum. Also, in capacity-limited problems many routers may be able to operate at full capacity, so you need an additional criterion to distinguish among all the "fully utilizable" routers.
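The plain greedy step can be sketched as follows. This is a minimal illustration with hypothetical data structures (`routers`, `points`, `coverage` are assumptions for the example), not the calculator's actual implementation; note that it is fully deterministic, which is exactly the limitation described above.

```python
def greedy_place(routers, points, coverage, k):
    """Pick k routers one at a time, each time taking the router that
    covers the most not-yet-served sample points.

    routers  -- list of candidate router ids
    points   -- set of sample-point ids to cover
    coverage -- dict: router id -> set of point ids it can serve
    k        -- number of routers to place
    """
    unserved = set(points)
    chosen = []
    for _ in range(k):
        # Marginal gain: how many still-unserved points each router adds.
        best = max(routers, key=lambda r: len(coverage[r] & unserved))
        gain = len(coverage[best] & unserved)
        if gain == 0:          # nothing left to gain; stop early
            break
        chosen.append(best)
        unserved -= coverage[best]
    return chosen
```

Run with the same inputs, this always returns the same placement, and ties between equally useful routers are broken arbitrarily by list order, which is the "fully utilizable routers" ambiguity mentioned above.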
In the mesh-aware calculation, random paths are used to generate many possible solutions. The algorithm first takes all the sample points and projects them onto a local grid. For each grid cell, two scores are computed for every router: a local score that only considers coverage within that cell's area, and a global score that considers total coverage. In general, at each grid cell some routers will do a good job locally, and those are preselected for the full calculation. The heart of the calculation is figuring out how much a new router will improve the solution as it is built up. The mesh-aware calculation essentially computes an inner product between a candidate router and the set of as-yet unserved or partially served points at each grid cell. This inner product is nontrivial because it has to balance the loading among the routers already selected.
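One way to picture that inner product is as a capacity-limited dot product between the remaining (unserved or partially served) demand and the candidate's link quality. The function below is a hedged sketch under assumed names and weighting, not the product's exact formula:

```python
def marginal_score(demand, signal, capacity):
    """Score one candidate router against the remaining demand.

    demand   -- dict: point id -> unserved demand remaining at that point
    signal   -- dict: point id -> link quality from the candidate to that
                point (0.0 means the point is out of reach)
    capacity -- how much total demand the candidate can absorb

    The score is a capacity-limited inner product of remaining demand with
    link quality: fully served points contribute nothing, and partially
    served points contribute only their remainder.
    """
    served = 0.0
    score = 0.0
    # Serve the best-quality points first until capacity runs out.
    for pt in sorted(signal, key=signal.get, reverse=True):
        if served >= capacity:
            break
        take = min(demand.get(pt, 0.0), capacity - served)
        served += take
        score += take * signal[pt]
    return score
```

Because `demand` holds only the remainder at each point, routers already selected automatically reduce what a new candidate can score, which is the load-balancing aspect described above.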
With the mesh-aware model there is a very large number of possible paths (the factorial of the number of grid cells), and the quality of the answer depends on the number of trials you run. Each trial consists of a random path with a cooling process: at first only fully loaded routers are accepted, and the acceptance criterion is lowered as the trial cools. Each trial takes a lot of computation, which scales with the size of the problem. So if you can break the problem into smaller problems, you can effectively explore many more combinations; Creating Cluster Bounds is one way to do this.
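The shape of one such trial can be sketched as below. The cooling schedule, scoring, and data structures here are illustrative assumptions; the point is only the structure: a random visit order plus an acceptance threshold that relaxes as the trial proceeds.

```python
import random

def run_trial(cells, candidates_for, utilization, rng,
              start_threshold=1.0, cooling=0.9):
    """Visit grid cells along a random path; at each cell accept the best
    candidate router only if its expected utilization meets the current
    acceptance threshold, which is lowered (cooled) after every cell.

    cells          -- list of grid-cell ids
    candidates_for -- dict: cell id -> list of candidate router ids
    utilization    -- callable: router id -> expected utilization in [0, 1]
    rng            -- a random.Random instance (seedable for repeatability)
    """
    path = list(cells)
    rng.shuffle(path)                  # the random path for this trial
    threshold = start_threshold        # start by demanding full loading
    placed = []
    for cell in path:
        best = max(candidates_for[cell], key=utilization, default=None)
        if best is not None and utilization(best) >= threshold:
            placed.append(best)
        threshold *= cooling           # relax the bar as the trial cools
    return placed
```

Running many trials with different seeds and keeping the best-scoring result is what lets the randomized approach escape the single fixed answer of the pure greedy method.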
A simple way to see why this works: suppose you can break a box of size 100 into two boxes of size 50. The effort needed to run M trials on the large box lets you run roughly M/2 trials on each of the smaller boxes. Because the two halves are solved independently, any trial of one half can be combined with any trial of the other, so instead of evaluating only M candidate solutions you are effectively seeing (M/2)**2, or M-squared over 4. So if you can break the problem down into many decoupled pieces, you can get a much better answer by examining many small problems. In the real world only some pieces are truly decoupled and independent of the rest of the problem, but you can usually divide the world into "lightly-coupled" pieces.

With the mesh solution you can generate Service Area Boundary (SAB) BNA files, or build them by hand. This link takes you to an article about creating SAB files in the form of Cluster Boundaries: Cluster bounds in Mesh. The solution separates the samples by which polygon (BNA) they are in. A scheduling algorithm then orders the solutions of the individual polygons so that the first polygons solved may also help provide capacity to some of the later-scheduled polygons. The solution is therefore improved by breaking it down into separate polygons, solving them one at a time, and using the results of previously solved parts of the problem to influence the later-solved polygons. Separating out any decoupled samples is good, and making many roughly equally sized regions is also good.
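The trial-count arithmetic above works out as follows, assuming (as an illustration) that a trial's cost is unchanged by the split, so the M-trial budget gives M/2 trials per half:

```python
M = 1000                      # trial budget on the full-size box
trials_per_half = M // 2      # same budget split across two half-size boxes

# The halves are solved independently, so any trial of one half can pair
# with any trial of the other; combinations effectively explored:
effective = trials_per_half * trials_per_half

print(effective)              # 250000, i.e. M**2 / 4, versus only M = 1000
```

Even at this small scale the split buys a 250x larger effective search, which is why decoupling into many equally sized regions pays off so quickly.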
Still need help?
Contact us through our support portal!