Spatial Clustering With Equal Sizes

By | November 4, 2013

Cluster Map

This is a problem I have encountered many times: the goal is to take a sample of spatial locations and cluster them subject to constraints. In addition to specifying the number of clusters, K, in advance, the size of each cluster, m_k, must be held fixed. One application of this algorithm is when a sample frame needs to be geographically stratified and pre-allocated while keeping the cluster sizes equal (or otherwise fixed) to simplify the operational fielding of a study.

I have done a cursory search for other approaches to this problem but have come up fairly empty, and I would certainly be interested in hearing of any that are in use. In general, this runs somewhat counter to the textbook treatment of k-means clustering, where cluster sizes emerge naturally from the optimization criterion.

This is one of several possible approaches to clustering spatial data, and other assignment schemes could be substituted. One in particular is the CLARANS algorithm, but like other clustering approaches it does not constrain the sizes of the clusters. The goal here is to keep the clusters the same size while reducing the total spatial distance between each observation and the center of its cluster.

I created a random dataset with just under 10,000 randomly selected geographic coordinates in the 48 contiguous states (removing Alaska and Hawaii). Based on latitude and longitude, the locations can be clustered and the sizes constrained to m_k. In this example I use exactly equal-sized clusters (except when n is not divisible by K). Alternatively, one could pre-allocate cluster sizes that are fixed in advance but differ from cluster to cluster, and constrain the algorithm to those sizes.
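For readers without the original file, a comparable synthetic dataset can be generated. This is a rough sketch in Python: the bounding box for the 48 states is an approximation (the original data were filtered more carefully), and the function and column names are illustrative, though the code later in the post does expect a LocationID column.

```python
import random

# Hypothetical stand-in for the article's dataset: random points inside a
# rough bounding box for the 48 contiguous states (assumed extents).
def make_sample(n, seed=42):
    rng = random.Random(seed)
    rows = []
    for i in range(n):
        lat = rng.uniform(25.0, 49.0)     # south-north extent (assumed)
        lon = rng.uniform(-124.0, -67.0)  # west-east extent (assumed)
        rows.append({"LocationID": i + 1, "Latitude": lat, "Longitude": lon})
    return rows

sample = make_sample(9990)
```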

Latitude    Longitude    Cluster
37.46644    -113.4120    1
40.24648     -74.7457    2
31.89746     -85.5054    3
41.08111     -85.3031    4


The distance function takes a couple of things into account. First, the earth does not have a constant radius: because it spins, it flattens slightly into an oblate spheroid, so I built both the equatorial and polar radii into the function. This way, when traveling over a large span of latitude, an appropriate local radius is used. Second, because the earth is round, the Pythagorean theorem does not apply, and distance must be measured along the curved surface. The function therefore combines the flattening of the earth with a great-circle distance formula.
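As a cross-check, the same calculation can be sketched in Python (mirroring the R `calc_dist` function used later in the post; radii in miles, and the clamp on the cosine term guards against floating-point values just outside [-1, 1]):

```python
import math

EQ_RADIUS = 3963.191     # equatorial radius, miles
POLAR_RADIUS = 3949.903  # polar radius, miles

def calc_dist(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles, with the earth's radius taken at
    the destination latitude to account for the flattening of the earth."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a, b = EQ_RADIUS, POLAR_RADIUS
    num = (a**2 * math.cos(lat2))**2 + (b**2 * math.sin(lat2))**2
    den = (a * math.cos(lat2))**2 + (b * math.sin(lat2))**2
    radius = math.sqrt(num / den)  # local earth radius at the destination
    cos_angle = (math.sin(lat1) * math.sin(lat2)
                 + math.cos(lat1) * math.cos(lat2) * math.cos(lon2 - lon1))
    central = math.acos(min(1.0, max(-1.0, cos_angle)))
    return radius * central

# One degree of longitude along the equator is roughly 69 miles:
# calc_dist(0, 0, 0, 1)
```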

I’m still fine-tuning and improving the algorithm, but my initial version is as follows:

1) Set equal cluster sizes, e.g. n/k, or assign pre-specified sizes.
2) Initialize the cluster assignment. I’m still working on a better approach, but for now I randomly order the cluster labels and assign them systematically across all observations.
3) Calculate the centers of the clusters.
4) Take the first observation and assign it to the closest cluster.
5) Since one cluster now has m_k+1 members and another has m_k-1, establish a trade to even out the sizes: the receiving cluster gives back its observation closest to the giving cluster’s center.
6) Continue this process through all locations.
7) Calculate the sum of the distances from each observation to its assigned centroid.
8) If the next iteration doesn’t decrease that sum (within the tolerance threshold), stop.
9) Otherwise, return to step 3 until the maximum number of iterations is met.
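The steps above can be sketched as a compact prototype. This illustrative Python version uses plain Euclidean distance on planar points for brevity (swap in a great-circle distance for geographic coordinates) and assumes n is divisible by k; all names here are mine, not from the original code.

```python
import math
import random

def constrained_kmeans(points, k, max_iter=20, tol=1e-6):
    """Equal-size clustering via the trade heuristic described above.

    points: list of (x, y); assumes len(points) is divisible by k.
    """
    n = len(points)
    labels = [i % k for i in range(n)]  # systematic initial assignment...
    random.shuffle(labels)              # ...in random order (steps 1-2)

    def centers():
        sums = [[0.0, 0.0, 0] for _ in range(k)]
        for (x, y), l in zip(points, labels):
            sums[l][0] += x; sums[l][1] += y; sums[l][2] += 1
        return [(sx / m, sy / m) for sx, sy, m in sums]

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    error_old = float("inf")
    for _ in range(max_iter):
        mu = centers()                                             # step 3
        for i, p in enumerate(points):
            nearest = min(range(k), key=lambda j: dist(p, mu[j]))  # step 4
            prev = labels[i]
            if nearest != prev:
                # step 5: trade back the member of the receiving cluster
                # that is closest to the giving cluster's center
                members = [j for j, l in enumerate(labels) if l == nearest]
                give = min(members, key=lambda j: dist(points[j], mu[prev]))
                labels[i], labels[give] = nearest, prev
        mu = centers()
        error = sum(dist(points[i], mu[labels[i]]) for i in range(n))  # step 7
        if abs(error_old - error) < tol:                               # step 8
            break
        error_old = error                                              # step 9
    return labels
```

Because every move is a swap, each cluster keeps exactly n/k members throughout; the same trade move works with pre-specified unequal sizes as long as the initial assignment respects them.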

The following code is what I used for my prototype; it is not strictly optimized and may take several minutes (~15) on datasets with many thousands of observations. I’ll provide optimized R, Python, and maybe some PHP code at a later time. I’ve included a verbose version of the R code that prints convergence information as well as a timer estimating how long the remaining iterations will take.


# Convert degrees to radians
as_radians = function(theta=0){
  return(theta * pi / 180)
}

# Great-circle distance in miles, using a local earth radius to account
# for the ellipticity of the earth
calc_dist = function(fr, to) {
  lat1 = as_radians(fr$lat)
  lon1 = as_radians(fr$lon)
  lat2 = as_radians(to$lat)
  lon2 = as_radians(to$lon)
  a = 3963.191 # equatorial radius (miles)
  b = 3949.903 # polar radius (miles)
  numerator = ( a^2 * cos(lat2) )^2 + ( b^2 * sin(lat2) )^2
  denominator = ( a * cos(lat2) )^2 + ( b * sin(lat2) )^2
  radiusofearth = sqrt(numerator/denominator) # Accounts for the ellipticity of the earth.
  d = radiusofearth * acos( sin(lat1) * sin(lat2) + cos(lat1)*cos(lat2)*cos(lon2 - lon1) )
  return( list(distance_miles=d) )
}

raw.og = read.csv("", header=T, sep="\t")
orig.data = raw.og[,1:3] # columns: Latitude, Longitude, LocationID

dirichletClusters_constrained = function(orig.data, k=5, max.iter=50, tolerance=1, plot.iter=TRUE) {
  fr = to = NULL

  r.k.start = sample(seq(1:k))
  n = nrow( orig.data )
  k.size = ceiling(n/k)
  initial.clusters = rep(r.k.start, k.size)

  if( n %% length(initial.clusters) != 0 ){
    exclude.k = length(initial.clusters) - n %% length(initial.clusters)
  } else {
    exclude.k = 0
  }
  orig.data$cluster = initial.clusters[1:(length(initial.clusters) - exclude.k)]
  orig.data$cluster_original = orig.data$cluster

  ## Calculate the cluster centers and each observation's distance to its center
  mu = cbind( by(orig.data$Latitude, orig.data$cluster, mean),
              by(orig.data$Longitude, orig.data$cluster, mean), seq(1:k) )
  tmp1 = match(orig.data$cluster, mu[,3])
  fr$lat = mu[tmp1,1]; fr$lon = mu[tmp1,2]
  to$lat = orig.data$Latitude; to$lon = orig.data$Longitude
  orig.data$distance.from.center = calc_dist(fr, to)$distance_miles
  orig.data$distance.from.center_original = orig.data$distance.from.center

  ## Set some initial configuration values
  is.converged = FALSE
  iteration = 0
  error.old = Inf
  error.curr = Inf

  while ( !is.converged && iteration < max.iter ) { # Iterate until threshold or maximum iterations

    iteration = iteration + 1
    start.time = as.numeric(Sys.time())
    cat("Iteration ", iteration, sep="")

    if( plot.iter==TRUE ){
      plot(orig.data$Longitude, orig.data$Latitude, col=orig.data$cluster, pch=16, cex=.6,
           xlab="Longitude", ylab="Latitude")
    }

    for( i in 1:n ) {
      # Iterate over each observation and measure its distance to each cluster center.
      # Produces an exchange: the observation moves to its closest cluster, which in
      # return gives up the observation closest to the giving cluster's center.
      fr = to = distances = NULL
      for( j in 1:k ){
        # Distance from observation i to the center of cluster j
        fr$lat = orig.data$Latitude[i]; fr$lon = orig.data$Longitude[i]
        to$lat = mu[j,1]; to$lon = mu[j,2]
        distances[j] = as.numeric( calc_dist(fr, to) )
      }

      # Which cluster is the observation closest to?
      which.min.distance = which(distances==min(distances), arr.ind=TRUE)[1]
      previous.cluster = orig.data$cluster[i]
      orig.data$cluster[i] = which.min.distance # Reassign to the closest cluster

      # Trade back the observation in the receiving cluster that is closest
      # to the giving cluster's center
      if( previous.cluster != which.min.distance ){
        tmp.data = orig.data[orig.data$cluster==which.min.distance,]

        fr$lat = mu[previous.cluster,1]; fr$lon = mu[previous.cluster,2]
        to$lat = tmp.data$Latitude; to$lon = tmp.data$Longitude
        tmp.data$tmp.dist = calc_dist(fr, to)$distance_miles
        closest.match = which(tmp.data$tmp.dist==min(tmp.data$tmp.dist), arr.ind=TRUE)[1]
        LocationID = tmp.data$LocationID[closest.match]
        orig.data$cluster[orig.data$LocationID == LocationID] = previous.cluster
      }
    }

    # Recalculate the cluster means and each observation's distance from its center
    mu = cbind( by(orig.data$Latitude, orig.data$cluster, mean),
                by(orig.data$Longitude, orig.data$cluster, mean), seq(1:k) )
    tmp1 = match(orig.data$cluster, mu[,3])
    fr = list(lat = mu[tmp1,1], lon = mu[tmp1,2])
    to = list(lat = orig.data$Latitude, lon = orig.data$Longitude)
    orig.data$distance.from.center = calc_dist(fr, to)$distance_miles

    # Test for convergence: is the change in total distance within the tolerance?
    error.curr = sum(orig.data$distance.from.center)
    error.diff = abs( error.old - error.curr )
    error.old = error.curr
    if( !is.nan( error.diff ) && error.diff < tolerance ) {
      is.converged = TRUE
    }

    # Time each iteration to estimate how long all remaining iterations will take
    stop.time = as.numeric(Sys.time())
    hour.diff = (((stop.time - start.time) * (max.iter - iteration))/60)/60
    cat("\n Error ", error.diff, " Hours remaining for all iterations ", hour.diff, "\n")

    # Write out each iteration; can later be used as a starting point if the run is paused
    write.table(orig.data, paste("C:\\optimize_iteration_", iteration, "_data.csv", sep=""), sep=",", row.names=F)
  }

  centers = data.frame(mu)
  ret.val = list("centers" = centers, "cluster" = factor(orig.data$cluster), "LocationID" = orig.data$LocationID,
                 "Latitude" = orig.data$Latitude, "Longitude" = orig.data$Longitude,
                 "k" = k, "iterations" = iteration, "error.diff" = error.diff)
  return(ret.val)
}

# Constrained clustering
cl_constrain = dirichletClusters_constrained(orig.data, k=4, max.iter=5, tolerance=.0001, plot.iter=TRUE)
table( cl_constrain$cluster )

library(maps)
plot(cl_constrain$Longitude, cl_constrain$Latitude, col=cl_constrain$cluster, pch=16, cex=.6,
     xlab="Longitude", ylab="Latitude")
map("state", add=T)
points(cl_constrain$centers[,c(2,1)], pch=4, cex=2, col='orange', lwd=4)

Category: Uncategorized

9 thoughts on “Spatial Clustering With Equal Sizes”

  1. Pingback: Spatial Clustering With Equal Sizes | R for Jou...

  2. anspiess

    Interesting post! With a little bit of work I think one could modify the kmeans.c function in the R source /library/stats/src folder not only to minimize the squared distances from the centroids but to also use equal group sizes as an additional constraint into the minimization. If you know someone proficient in C, maybe this may be worth trying…

    Cheers, Andrej.

  3. Paulo van Breugel

    For similar functionality (and, I think, a somewhat similar approach), you may want to have a look at the spcosa R package, and more specifically the function stratify. Did you do any further work on this? It would be interesting to have a Python implementation that could be used in GRASS or QGIS.

  4. Pingback: Spatial Clustering With Equal Sizes | Oppimisti...

  5. Pingback: Comment on Spatial Clustering With Equal Sizes ...

  6. Philippe Perie


    Nice code!
    I’m actually using it to minimize traveling costs while ensuring geographical spread for a face-to-face survey; the number of clusters is the number of sampling points.
    A simple suggestion to speed up the process without getting into the intricacies of C: since there are only two coordinates, latitude and longitude, the file can be sorted by latitude, longitude, or a function of both before the initial cluster assignment (starting closer to the final solution means fewer iterations).

  7. rania

    Many thanks for this nicely explained algorithm and code!
    I am testing it for a different application related to location privacy.
    I would like to ask if there is any publication such as book or paper where this approach is published.

  8. Brent Edwards

    Very interesting! Did you ever get around to creating a Python version of this code?

