public void nearest(int i, int d, int[] reach)
{// label, in reach, every vertex whose distance from vertex i is at most d;
 // reach is assumed to have been initialized so that reach[u] > d for all u
   if (d < 0) return;  // no vertex is this close
   ArrayQueue q = new ArrayQueue();
   reach[i] = 0;       // distance from i is zero
   q.put(new Integer(i));
   while (!q.isEmpty())
   {// remove a labeled vertex from the queue
      int w = ((Integer) q.remove()).intValue();
      // label all unreached vertices adjacent from w;
      // their distance from i is reach[w] + 1
      Iterator iw = iterator(w);
      while (iw.hasNext())
      {// visit a vertex adjacent from w
         int u = ((EdgeNode) iw.next()).vertex;
         if (reach[u] > d)
         {// u is an unreached vertex
            reach[u] = reach[w] + 1;   // label u with its distance from i
            if (reach[u] < d)          // u's neighbors may also be within distance d
               q.put(new Integer(u));
         }
      }
   }
}
Each iteration of the outer while loop takes O(out-degree of w) time, where w is the vertex removed from the queue. Only vertices whose distance from i is less than d are put onto the queue. So the complexity is linear in the number of vertices that get labeled plus the sum of the out-degrees of the vertices that are at most distance d-1 from vertex i. That is, the complexity is O(number of vertices that get labeled + sum of the out-degrees of the labeled vertices, excluding those at distance exactly d).
First determine whether the middle element, x[middle], is an equilibrium point (i.e., whether x[middle] == middle). If it is, we are done. If x[middle] > middle, there is no equilibrium point in the right half of the array, so the search continues in the left half. If x[middle] < middle, there is no equilibrium point in the left half, so the search continues in the right half. The code is given below; it is similar to that for binary search.
public int equilibriumPoint(int[] x)
{// return an index i such that x[i] == i; return -1 if there is no such index
   int left = 0;
   int right = x.length - 1;
   while (left <= right)
   {
      int middle = (left + right) / 2;
      if (x[middle] == middle)
         return middle;          // equilibrium point found
      if (x[middle] < middle)
         left = middle + 1;      // no equilibrium point in the left half
      else
         right = middle - 1;     // no equilibrium point in the right half
   }
   return -1;  // no equilibrium point
}
On each iteration of the while loop, right - left + 1 is at least halved. So the code terminates after O(log x.length) iterations. Each iteration takes O(1) time, so the overall complexity is O(log x.length).
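As a quick check of this code, here is a small driver with made-up data; it assumes, as the elimination argument above requires, that x is sorted and contains distinct integers.

public class EquilibriumPointTest
{
   public static void main(String[] args)
   {
      int[] x = {-3, -1, 2, 5, 9};                 // sorted, distinct; x[2] == 2
      System.out.println(equilibriumPoint(x));     // prints 2
      int[] y = {1, 2, 3, 4};                      // no index i with y[i] == i
      System.out.println(equilibriumPoint(y));     // prints -1
   }

   // the method given above, made static so the driver is self-contained
   public static int equilibriumPoint(int[] x)
   {
      int left = 0, right = x.length - 1;
      while (left <= right)
      {
         int middle = (left + right) / 2;
         if (x[middle] == middle) return middle;
         if (x[middle] < middle) left = middle + 1;
         else right = middle - 1;
      }
      return -1;  // no equilibrium point
   }
}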
Kruskal's method considers the edges in nondecreasing order of cost. The edge (3,7) is considered first and is selected for inclusion into the spanning tree. Next, the edge (1,2) is considered and selected. The edge (3,4) is considered next and selected. The edge (4,7) is considered next; it is rejected because its inclusion in the spanning tree would create a cycle. Continuing in this manner, we obtain the spanning tree that is given below. The cost of this spanning tree is 59.
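The selections above follow the general Kruskal strategy: examine the edges in nondecreasing cost order and reject any edge that would close a cycle. The sketch below illustrates this strategy; the Edge class and the simple union-find arrays are assumptions made for this sketch and are not the text's graph classes.

import java.util.Arrays;

public class KruskalSketch
{
   static class Edge
   {
      int u, v, cost;
      Edge(int u, int v, int cost) {this.u = u; this.v = v; this.cost = cost;}
   }

   static int[] parent;  // union-find: parent[x] == x when x is a tree root

   static int find(int x)
   {// return the root of the tree that contains x
      while (parent[x] != x) x = parent[x];
      return x;
   }

   // return the edges of a minimum spanning tree of a connected graph
   // with vertices 1, ..., n
   public static Edge[] kruskal(Edge[] edges, int n)
   {
      Arrays.sort(edges, (a, b) -> a.cost - b.cost);  // nondecreasing cost order
      parent = new int[n + 1];
      for (int i = 1; i <= n; i++) parent[i] = i;     // each vertex is its own tree
      Edge[] tree = new Edge[n - 1];
      int k = 0;
      for (Edge e : edges)
      {
         int ru = find(e.u), rv = find(e.v);
         if (ru != rv)
         {// the endpoints are in different trees, so no cycle is created
            tree[k++] = e;
            parent[ru] = rv;                          // union the two trees
         }
         // otherwise the edge is rejected; it would create a cycle
         if (k == n - 1) break;                       // the spanning tree is complete
      }
      return tree;
   }
}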
Prim's method is started at vertex 1. The edge (1,2) (and hence the vertex 2) is added first. Next, the edge (2,6) is added. This is followed by the inclusion of the edge (2,3). Continuing in this manner, we obtain the same spanning tree as was obtained by Kruskal's method. The cost of this spanning tree is 59.
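For comparison, a compact sketch of Prim's strategy (grow a single tree from the start vertex, always adding the cheapest edge that reaches a vertex not yet in the tree) appears below. It assumes a connected graph given by a cost adjacency matrix cost[i][j], with Integer.MAX_VALUE marking absent edges; these conventions are assumptions of the sketch, not the text's representation.

public class PrimSketch
{
   // return the cost of a minimum spanning tree of a connected graph
   // with vertices 1, ..., n given by a cost adjacency matrix
   // (cost[i][j] == Integer.MAX_VALUE when there is no edge between i and j)
   public static int primCost(int[][] cost, int n, int start)
   {
      boolean[] inTree = new boolean[n + 1];
      int[] best = new int[n + 1];     // cheapest known edge from the tree to each vertex
      java.util.Arrays.fill(best, Integer.MAX_VALUE);
      best[start] = 0;
      int total = 0;
      for (int step = 1; step <= n; step++)
      {// add the out-of-tree vertex that is cheapest to reach
         int v = -1;
         for (int u = 1; u <= n; u++)
            if (!inTree[u] && (v == -1 || best[u] < best[v])) v = u;
         inTree[v] = true;
         total += best[v];             // 0 for the start vertex
         for (int u = 1; u <= n; u++)  // update the cheapest edges into the tree
            if (!inTree[u] && cost[v][u] < best[u]) best[u] = cost[v][u];
      }
      return total;
   }
}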
We use the p[v][6] values to construct the desired path. The result is 1, 6, 3, 5, 4, 2, 7.
Observe that c(i,k,k) = c(i,k,k-1), that c(k,j,k) = c(k,j,k-1), and that the c(i,j,k-1) values that are not on row k or column k are used only to compute c(i,j,k). Therefore, following iteration k of the outermost for loop, c(i,j) = c(i,j,k). In particular, following the last iteration of the outermost loop, c(i,j) = c(i,j,n).
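This is the observation that lets the all-pairs shortest-path computation overwrite a single two-dimensional array c in place instead of keeping a separate matrix for each k. A minimal sketch of that in-place loop structure is given below; the array name c, the 1-through-n indexing, and the use of Integer.MAX_VALUE / 2 as "no edge" are conventions assumed for this sketch.

public class FloydSketch
{
   static final int NO_EDGE = Integer.MAX_VALUE / 2;   // large enough, yet overflow-safe

   // on entry, c[i][j] is the cost of the edge (i,j) (NO_EDGE if absent, 0 if i == j);
   // on exit, c[i][j] = c(i,j,n), the length of a shortest path from i to j
   public static void allPairs(int[][] c, int n)
   {
      for (int k = 1; k <= n; k++)          // now allow k as an intermediate vertex
         for (int i = 1; i <= n; i++)
            for (int j = 1; j <= n; j++)
               if (c[i][k] + c[k][j] < c[i][j])
                  c[i][j] = c[i][k] + c[k][j];
      // during iteration k, row k and column k do not change
      // (c(i,k,k) = c(i,k,k-1) and c(k,j,k) = c(k,j,k-1)), so each in-place
      // update uses exactly the values the three-dimensional recurrence needs
   }
}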
Let hSum(i,j) = h(i) + h(i+1) + ... + h(j), the sum of the heights of items i through j.
For bins(i), i <= n, item i is at the bottom of a bin. Let item j, j >= i, be at the top of this bin. This is possible only if hSum(i,j) + p(i) + p(j) <= h. In an optimal packing, the remaining items (items j+1 through n) must be packed using the fewest possible number of bins, that is, bins(j+1) bins. So we get the following dynamic programming recurrence:

bins(i) = 1 + min{bins(j+1) | hSum(i,j) + p(i) + p(j) <= h and i <= j <= n},

where bins(n+1) = 0 because no items remain to be packed.
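The recurrence may be computed for i = n, n-1, ..., 1. A direct sketch of this computation is given below; the array names h and p, the parameter binHeight (the h of the recurrence), and the 1-through-n indexing are conventions assumed for the sketch.

public class BinsSketch
{
   // bins[i] = fewest bins needed to pack items i, ..., n (items are packed in order);
   // arrays h and p are indexed 1 through n, position 0 is unused
   public static int fewestBins(int[] h, int[] p, int binHeight, int n)
   {
      int[] bins = new int[n + 2];
      bins[n + 1] = 0;                      // no items remain to be packed
      for (int i = n; i >= 1; i--)
      {
         bins[i] = Integer.MAX_VALUE;       // "no feasible packing" until some j qualifies
         int hSum = 0;
         for (int j = i; j <= n; j++)
         {
            hSum += h[j];                   // hSum now equals hSum(i,j)
            if (hSum + p[i] + p[j] <= binHeight      // items i through j fit in one bin
                && bins[j + 1] != Integer.MAX_VALUE
                && 1 + bins[j + 1] < bins[i])
               bins[i] = 1 + bins[j + 1];
         }
      }
      return bins[1];                       // Integer.MAX_VALUE if no packing is feasible
   }
}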
The initial values are q(n+1,j) = 0 for 1 <= j <= m+1 and q(i,m+1) = 0 for 1 <= i <= n; these may be set in O(n+m) time.
Then compute q(i,j) for i = n, n-1, ..., 1, in that order, using the given equation for q(i,j). For each i, compute q(i,j) for j = m, m-1, ..., 1, in that order. It takes O(mn) time to compute all of these q(i,j) values, so the overall complexity is O(mn).