
+1-646-513-2712 | +61-280363121 | +44-1316080294
support@globalwebtutors.com
##### Analysis of Algorithms Assignment Help, Analysis of Algorithms Homework Help

We at Global Web Tutors provide expert help for Analysis of Algorithms assignments and homework. Our Analysis of Algorithms online tutors are experts in providing homework help to students at all levels.

Please send your assignment to support@globalwebtutors.com to get instant Analysis of Algorithms homework help.

Analysis of Algorithms online tutors are available 24/7 to provide assignment help as well as general Analysis of Algorithms help.

Analysis of algorithms

An algorithm is a finite sequence of instructions, each of which has a clear meaning and can be performed in a finite number of simple steps with a finite amount of effort. An algorithm solves a problem through such a sequence of steps. In computer science, the design and analysis of algorithms is important because it lets us solve many problems efficiently.

Problem Development Steps:
=====================

The steps of problem development are as follows:

• Define the problem
• Develop a model
• Specify the algorithm
• Design the algorithm
• Check the correctness of the algorithm
• Analyze the algorithm
• Implement the algorithm
• Test the program
• Document the solution

The efficiency of an algorithm is stated as a function relating the input length to the number of steps the algorithm takes (its time complexity) or to the volume of memory it uses (its space complexity).

Need of analysis:
===========

Analysis of an algorithm determines how its running time and memory requirements grow with the size of the input. We perform analysis on the basis of:

• Worst case - the maximum number of steps taken on any instance of size n.
• Best case - the minimum number of steps taken on any instance of size n.
• Average case - the average number of steps taken over all instances of size n.
• Amortized - the cost of a sequence of operations on inputs of size n, averaged over the sequence.
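As a small illustration (the helper below is hypothetical, not part of any assignment), counting the comparisons made by a linear search makes the best-case/worst-case distinction concrete:

```python
def linear_search_steps(arr, target):
    """Search arr for target; return (index, number of comparisons made)."""
    steps = 0
    for i, x in enumerate(arr):
        steps += 1
        if x == target:
            return i, steps
    return -1, steps

# Best case: the target is the first element, 1 comparison.
# Worst case: the target is absent, n comparisons.
```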

Different strategies of analysis:

• Asymptotic analysis
• Amortized analysis
• A priori analysis

Analysis of algorithms Online experts ensure:
• Help for Essay writing on various Analysis of algorithms topics
• Custom solutions for Analysis of algorithms assignments at Masters & PhD level.
• Help for Doctoral Dissertation in Analysis of algorithms

Analysis of algorithms questions help services by live experts:

• 24/7 Chat, Phone & Email support
• Monthly & cost-effective packages for regular customers
• Live help for Analysis of algorithms online quizzes & online tests, Analysis of algorithms exams & midterms

Get instant help for Analysis of algorithms Report writing, Technical reports on Analysis of algorithms. We have excellent writers for writing Case studies on Analysis of algorithms.

Problem 1

Rank the following functions by order of growth; that is, find an arrangement g1(n), g2(n), ..., g24(n) of functions satisfying gi(n) = O(g(i+1)(n)) for every i ∈ {1, ..., 23}. Partition your list into equivalence classes such that f(n) and g(n) are in the same class if and only if f(n) = Θ(g(n)). You do not have to prove your answers.

n log n, n^(2/3), n^(3/2), n^(1/log n), log n, 1, (log n)^(log n), 2^n, 2^(log_10 n), 4^(log n), n^2, n·2^n, 2^(n+1), log log n, n log log n, n!, 2^(2n), 2^(log n), log^2 n, log(n^2), (√2)^(log n), √(log n), n + n^2/10, 20

Remarks:

• In this class we use log n to denote the logarithm base 2.
• Use Stirling's formula to figure out how to rank n!. Stirling's formula is:

n! = √(2πn) · (n/e)^n · (1 + O(1/n))

• Use also this fact: for any constants b1, b2 > 0, log^(b1) n = O(n^(b2)) and n^(b2) ≠ O(log^(b1) n). In words, the logarithm of n raised to any power grows slower than any power of n.

Problem 2

Find the tightest possible bound for T(n) that satisfies the following recurrence:

T(n) = 2T(n/4) + O(1) for n ≥ 4
T(n) = O(1) for n < 4

Use either the "unrolling the recurrence" technique or the "substitution/induction" technique from the textbook/class.
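Before proving a bound, the recurrence can be sanity-checked numerically. A minimal sketch, assuming the O(1) terms are the constant 1 (an assumption made purely for this experiment), evaluates T directly; unrolling level by level then suggests the closed form T(4^k) = 2^(k+1) − 1:

```python
def T(n):
    """Evaluate T(n) = 2*T(n/4) + 1, with T(n) = 1 for n < 4 (constants assumed to be 1)."""
    if n < 4:
        return 1
    return 2 * T(n // 4) + 1

# Unrolling k levels gives T(4^k) = 2^(k+1) - 1, which direct evaluation confirms.
```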

Problem 3

Given is a large paper with n different points with coordinates (x1, y1), (x2, y2), ..., (xn, yn). Notice that by folding the paper along a single line we can make some of the points align. For example, if the points are (1, 2), (2, 1), and (4, 3), then if we fold along the line going through the origin at the 45 degree angle, the points (1, 2) and (2, 1) will align. Design an O(n^2 log n) algorithm that finds the maximum number of pairs of points that can be aligned.

Problem 1

Given is an array A[0 ... n−1] containing integers from the set {0, 1, 2, ..., n^3 − 1}. Give an O(n) algorithm that sorts the array. Hint: Typically we write numbers in decimal notation. This implies that an integer x needs about log_10 x digits (why?). How many digits do we need for every number in {0, 1, 2, ..., n^3 − 1} if we write it in base n?
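Following the hint: every number in {0, ..., n^3 − 1} has at most three digits in base n, so three stable bucket passes suffice, each O(n). A sketch (function name hypothetical, not prescribed by the assignment):

```python
def sort_small_ints(A):
    """Sort integers from {0, ..., n^3 - 1} in O(n) via 3-pass radix sort, base n."""
    n = len(A)
    if n <= 1:
        return A[:]
    for d in range(3):                               # <= 3 digits in base n
        buckets = [[] for _ in range(n)]
        for x in A:
            buckets[(x // n ** d) % n].append(x)     # stable pass on the d-th digit
        A = [x for b in buckets for x in b]
    return A
```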

Problem 2

(a) We are given a sequence of n numbers a0, a1, ..., a(n−1). We would like to determine whether there exists an integer x that occurs in the sequence more than n/2 times (i.e., whether the sequence has a majority element). Design an algorithm that runs in time O(n) and argue its correctness and running time estimate. Example: For A = [3, 1, 2] the answer is NO. For A = [3, 1, 3] the answer is YES.

(b) We are given a sequence of n numbers a0, a1, ..., a(n−1). We would like to determine whether there exists an integer x that occurs in the sequence more than n/3 times. Design an algorithm that runs in time O(n) and argue its correctness and running time estimate.
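For part (a), one standard O(n) approach is the Boyer-Moore majority vote; a sketch (not necessarily the solution expected in class) with the required verification pass:

```python
def majority_element(A):
    """Return the element occurring more than len(A)//2 times, or None (Boyer-Moore vote)."""
    candidate, count = None, 0
    for x in A:
        if count == 0:
            candidate, count = x, 1
        elif x == candidate:
            count += 1
        else:
            count -= 1
    # Verification pass: the surviving candidate is only a candidate, so check it.
    if candidate is not None and A.count(candidate) > len(A) // 2:
        return candidate
    return None
```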

Problem 3

Use the Master Theorem to evaluate the following recurrences:

• T(n) = 8T(n/4) + Θ(n^2)
• T(n) = 3T(n/9) + Θ(√n)
• T(n) = 2T(n/2) + Θ(1)
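For reference, the standard statement (as in CLRS) against which each recurrence should be matched:

```latex
T(n) = a\,T(n/b) + f(n), \qquad a \ge 1,\ b > 1,
\qquad
T(n) =
\begin{cases}
\Theta\!\left(n^{\log_b a}\right) & \text{if } f(n) = O\!\left(n^{\log_b a - \epsilon}\right) \text{ for some } \epsilon > 0,\\[2pt]
\Theta\!\left(n^{\log_b a}\log n\right) & \text{if } f(n) = \Theta\!\left(n^{\log_b a}\right),\\[2pt]
\Theta\!\left(f(n)\right) & \text{if } f(n) = \Omega\!\left(n^{\log_b a + \epsilon}\right) \text{ and } a\,f(n/b) \le c\,f(n) \text{ for some } c < 1.
\end{cases}
```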

Algorithms, Homework 3

Problem 1

In a little town there are n houses on the main street. This street runs along the x-axis and the coordinates of the houses are x1, x2, ..., xn. You decided to open a candy shop and you want to find the most profitable coordinate x_best. The most profitable coordinate will be one that minimizes the sum of the distances to every house. Give an O(n) algorithm that finds x_best such that dist_best := ∑_{i=1}^{n} |x_best − x_i| is as small as possible.

Problem 2

Consider the following "minimize gaps" interval scheduling problem: Given is a global start time S and finish time F. Given is also a set of n intervals, where the i-th interval starts at time s_i and ends at time f_i, S ≤ s_i ≤ f_i ≤ F, for i ∈ {0, ..., n−1}. Find a set of non-overlapping intervals so that the overall time from S to F not covered by the selected intervals is as small as possible (we refer to this time as gap time). Example: S = 0, F = 10, and the intervals are (1, 3), (2, 6), (4, 5), (6, 10), (8, 9). If we select intervals (1, 3), (4, 5), (6, 10), then the time between 0 and 10 that is not covered by the intervals is (0, 1), (3, 4), and (5, 6) – total time 3. This is the smallest possible overall gap time. Note that we cannot select intervals (2, 6) and (6, 10) because they are overlapping.

Consider the following greedy strategies for this problem:

1. Select the earliest finishing interval and discard overlapping intervals. Keep doing this until all intervals have been eliminated (either selected or discarded).

2. Select the earliest starting interval and discard overlapping intervals. Keep doing this until all intervals have been eliminated (either selected or discarded).

3. Select the pair of non-overlapping intervals that have the smallest gap between them: find a pair of intervals i ≠ j such that s_j − f_i > 0 is the smallest possible. Select both intervals and discard overlapping intervals. Recursively do the same selection process with intervals that finish before s_i: the recursive call will have S_new = S and F_new = s_i. Similarly, recursively do the same selection process with intervals that start after f_j: now S_new = f_j and F_new = F. If there is no such pair of intervals, select a single interval that minimizes the gap between S and F (do not make any further recursive calls).

None of these strategies works. Find a counterexample for each strategy. In other words, for each strategy:

• find a set of intervals and S and F so that the strategy does not produce an optimal solution,
• highlight intervals selected by the strategy and state the corresponding gap time, and
• highlight intervals in an optimal solution (one that minimizes the overall gap time) and state the corresponding gap time.

Problem 3

Solve the "minimize gaps" interval scheduling problem (see Problem 2) using dynamic programming in time O(n^2). It suffices to find the minimum overall gap time; you do not have to find the corresponding intervals. As your argument of correctness, state all three parts of the heart of your solution. In particular, verbally describe the contents of each cell in your dynamic programming array, give a math formula that computes the values in the array using array values computed earlier, and state the return value of your algorithm.

Homework 4

Problem 1

Given is a sequence of integers a1, a2, ..., an. Give an O(n^2) algorithm that finds the largest possible sum of elements in an increasing subsequence of a1, ..., an.
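A sketch of one possible O(n^2) dynamic program (array and function names hypothetical): best[i] stores the largest sum over increasing subsequences that end at a[i].

```python
def max_sum_increasing_subsequence(a):
    """O(n^2) DP: best[i] = largest sum of an increasing subsequence ending at a[i]."""
    n = len(a)
    if n == 0:
        return 0
    best = [0] * n
    for i in range(n):
        # extend the best compatible earlier subsequence, or start fresh at a[i]
        best[i] = a[i] + max((best[j] for j in range(i) if a[j] < a[i]), default=0)
    return max(best)
```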

Problem 2

Given is an m × n matrix of integers a_{i,j}, i ∈ {1, ..., m}, j ∈ {1, ..., n}. We want to get from cell (1, 1) to cell (m, n) by "moving either east or south" – in other words, we consider a sequence of positions (r1, c1), ..., (r_{m+n−1}, c_{m+n−1}), where (r1, c1) = (1, 1), (r_{m+n−1}, c_{m+n−1}) = (m, n), and for every k ∈ {1, 2, ..., m + n − 2}, either r_{k+1} = r_k and c_{k+1} = c_k + 1, or r_{k+1} = r_k + 1 and c_{k+1} = c_k. Give an O(mn) algorithm computing the largest possible sum of elements a_{i,j} visited by such a sequence, i.e., ∑_{k=1}^{m+n−1} a_{r_k, c_k}. Hint: Use a 2D dynamic programming array.
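A sketch of the hinted 2D dynamic program (0-indexed, helper name hypothetical): dp[r][c] holds the best sum over east/south paths from (0, 0) to (r, c).

```python
def max_path_sum(a):
    """O(mn) DP: dp[r][c] = largest sum over east/south paths from (0,0) to (r,c)."""
    m, n = len(a), len(a[0])
    dp = [[0] * n for _ in range(m)]
    for r in range(m):
        for c in range(n):
            if r == 0 and c == 0:
                dp[r][c] = a[0][0]                       # start cell
            elif r == 0:
                dp[r][c] = dp[r][c - 1] + a[r][c]        # only east moves possible
            elif c == 0:
                dp[r][c] = dp[r - 1][c] + a[r][c]        # only south moves possible
            else:
                dp[r][c] = max(dp[r - 1][c], dp[r][c - 1]) + a[r][c]
    return dp[m - 1][n - 1]
```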

Problem 3

Implement the code for the Indivisible Knapsack problem (see slide 6). Also implement the following recursive pseudocode:

KNAPSACK-REC(k, v)
1. if k ≤ 0 then return 0
2. if v < w_k then return KNAPSACK-REC(k − 1, v)
3. return max{ KNAPSACK-REC(k − 1, v), c_k + KNAPSACK-REC(k − 1, v − w_k) }

Consider the following input instance: c_i = w_i = 1 for every i ∈ {1, 2, ..., n}, and W = n. Run both implementations on these input instances where n ranges through {5, 10, 20, 50, 100, 1000, 10000} (a total of 7 inputs). Measure the running time of both implementations and summarize your observations in a short paragraph. Also submit a plot with the running times (use timing functions inside your code, rerun it several times for the same n, and report the median of the observed times). If your code runs longer than one hour, you may stop the computation and report the time as ">1 hour".

Note: Heart of the solution. For problems 1 and 2 (and other problems in the future) you will need to use dynamic programming. To explain how your algorithm works, describe the "heart of the algorithm" (you do not need to include any other explanation). Recall that the heart of the algorithm consists of three parts:

• a precise verbal description of the meaning of your dynamic programming array,
• a mathematical formula that describes how to compute the value of each cell of the array (using previously computed array values), and
• the return value of the algorithm.

Given are two sequences of numbers a1, a2, ..., am and b1, b2, ..., bn, where m, n ≥ 1. Give an O((mn)^2) algorithm that finds their longest increasing common subsequence.

Given is a convex polygon with n vertices (x1, y1), (x2, y2), ..., (xn, yn) (the vertices are listed in clockwise order). A triangulation of a convex polygon is a set of n − 3 non-intersecting edges, where each edge connects two non-consecutive vertices (the overall picture consists of n − 2 triangles that together form the original polygon). We define the length of a triangulation as the sum of the lengths of these n − 3 edges. Give an O(n^3) algorithm that finds the minimum possible length of a triangulation of the given polygon. Hint: Use a 2D dynamic programming array. As before, include the heart of the solution that consists of three parts (the verbal description, the mathematical formula that computes the described value, and the return value).

Problem 3

An alphabet contains letters A, B, C, D, E, F. The frequencies of the letters are 35%, 20%, 15%, 15%, 8%, and 7%. We know that the Huﬀman algorithm always outputs an optimal preﬁx-free code. However, this code is not always unique (obviously we can, e.g., switch 0’s with 1’s and get a diﬀerent code – but, for some inputs, there are two optimal preﬁx-free codes that are signiﬁcantly diﬀerent). For the purposes of this exercise, we consider two Huﬀman codes to be diﬀerent if there exists a letter for which one of the codes assigns a shorter codeword than the other code.

Trace the Huffman algorithm and construct two different Huffman codes for the above input. Compute the expected number of bits per character (i.e., the expected codeword length) for both codes. Does there exist a prefix-free code with smaller expected codeword length? Reason your answer in a sentence or two, or provide such a prefix-free code.
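The expected codeword length can be cross-checked without writing out the codes: it equals the sum of the weights of all merged (internal) nodes divided by the total weight, because each symbol gains one bit per merge above it. A sketch using Python's heapq (helper name hypothetical):

```python
import heapq

def huffman_expected_bits(freqs):
    """Expected codeword length = (sum of all merged node weights) / total weight."""
    heap = list(freqs)
    heapq.heapify(heap)
    cost = 0.0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        cost += a + b                    # every symbol below this merge gains one bit
        heapq.heappush(heap, a + b)
    return cost / sum(freqs)

# Note: all optimal prefix-free codes for the same frequencies share this expected
# length, even when the codeword assignments differ.
```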


For every problem in this assignment:

• include the pseudocode of your algorithm and a short verbal description,
• briefly argue your algorithm's correctness and explain its running time,
• submit your code.

Problem 1

Recall the Sokoban game that we discussed in class. Given is an m × n grid representing a maze: every position is either empty or contains a wall. Additionally, there is a box at one of the empty locations, a person at another empty location, and one of the empty locations is the target location. The person can move to any of the adjacent empty locations (left, right, up, or down). If there is a box at that location, the person can push the box in the same direction (e.g., if the person is moving left, the box also moves left), provided that the box is not pushed onto a wall location. Implement the O((mn)^2) algorithm we discussed in class to decide whether it is possible to move the box to the target location.

Problem 2

Given is an undirected graph and two of its vertices s and t. Give an O(n + m) algorithm that computes the number of shortest paths from s to t.

Problem 3

Give an O(m + n) algorithm that finds the length of the longest path in a directed acyclic graph.

Note: Finding the longest path in a directed graph is hard – this problem is a variant of the so-called Hamiltonian path problem. However, if the input is guaranteed to be acyclic, a very efficient algorithm exists.

For Problems 1 and 3, include the pseudocode of your algorithm and a short verbal description. Briefly argue your algorithm's correctness and explain its running time.
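For Problem 3, a standard sketch (assuming 0-indexed vertices and an edge list; names hypothetical) processes vertices in topological order via Kahn's algorithm and returns the edge count of a longest path:

```python
from collections import deque

def longest_path_dag(n, edges):
    """Length (edge count) of the longest path in a DAG, O(n + m) via topological order."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    q = deque(i for i in range(n) if indeg[i] == 0)
    dist = [0] * n                       # dist[v] = longest path ending at v
    while q:
        u = q.popleft()
        for v in adj[u]:
            dist[v] = max(dist[v], dist[u] + 1)
            indeg[v] -= 1
            if indeg[v] == 0:            # all predecessors of v are finalized
                q.append(v)
    return max(dist)
```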

Problem 1

Given is a weighted connected undirected graph G = (V, E, w) and a subset of its vertices U. We say that a spanning tree T of G is U-leaved if all vertices in U are leaves in T (a leaf is a vertex that has only one neighbor). There might be other leaves in addition to the vertices in U. Give an O(n^2) (or an O(m log n)) algorithm that finds a minimum cost U-leaved spanning tree of G. If no U-leaved spanning tree of G exists, the algorithm outputs NONE.

Problem 2

Consider the following BFS-inspired algorithm for the single source shortest path problem. Let G = (V, E, w) be a positively weighted undirected graph and let s ∈ V be one of its vertices.

BFSshortestPath(G = (V, E, w), s)

1. For v ∈ V do
2.     Let found[v] = false and dist[v] = ∞.
3. Let Q[0] = s and found[s] = true and dist[s] = 0.
4. Let beg = 0 and end = 1.
5. While (beg < end) do
6.     Let v = Q[beg].
7.     For every neighbor u of v do
8.         If found[u] = false then
9.             Let found[u] = true and let dist[u] = dist[v] + w(v, u).
10.            Let Q[end] = u and increment end by 1.
11.        Else
12.            If dist[u] > dist[v] + w(v, u) then let dist[u] = dist[v] + w(v, u).
13.    Increment beg by 1.
14. Return dist[].

Does the algorithm work? If yes, argue why it works. If not, find a counterexample and compare the correct answer with the output of the algorithm.

Problem 3

Let G = (V, E, w) be a weighted directed graph. Give an O(mn) algorithm that detects whether G contains a negative-weight cycle.
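The O(mn) bound matches Bellman-Ford. A sketch (helper name hypothetical; vertices 0..n−1, edges as (u, v, w) triples) that initializes every distance to 0, which is equivalent to adding a zero-weight virtual source connected to all vertices:

```python
def has_negative_cycle(n, edges):
    """Bellman-Ford in O(mn): still relaxing after n full passes implies a negative cycle."""
    dist = [0] * n                       # zero-weight virtual source to every vertex
    for _ in range(n):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            return False                 # distances stabilized: no negative cycle
    return True                          # n passes all relaxed something: cycle exists
```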


Problem 1

Given is an n × m array Chessboard containing only 0's (empty squares) and 1's (occupied positions). Moreover, there is an unlimited number of dominos, i.e., 1 × 2 rectangular pieces. Your task is to decide if it is possible to cover all of the empty squares on the chessboard by non-overlapping dominos (the dominos cannot cover any of the occupied positions and they cannot "stick out" of the chessboard). Your algorithm should run in time O((mn)^2). Include a short paragraph describing your algorithm, briefly argue your algorithm's correctness, and explain its running time.

Problem 2

Recall that the Edmonds-Karp algorithm refines the idea of Ford and Fulkerson in the following way: in every iteration, the algorithm chooses the augmenting path that uses the smallest number of edges (if there are more such paths, it chooses one arbitrarily). Find a graph for which in some iteration the Edmonds-Karp algorithm has to choose a path that uses a backward edge. Run the algorithm on your graph – more precisely, for every iteration draw the residual graph and show the augmenting path taken by the algorithm as well as the flow after adding the augmenting path.

Problem 3

The Traveling Salesman Problem (TSP) is defined as follows: given is a complete weighted undirected graph on n vertices (i.e., there is an edge between every pair of vertices) and a number k > 0, does there exist a cycle going through every vertex exactly once with total weight at most k? (The weight of a cycle is the sum of the weights of the edges forming the cycle.) The Hamiltonian Cycle (HC) problem is defined as follows: given is an undirected graph, does there exist a cycle that goes through every vertex exactly once? Show that HC is polynomially reducible to TSP, i.e., HC ≤_P TSP. In other words, assume that we have a black box that solves TSP (the input to the black box is n, the weights of all the edges, and the number k; the output is YES if there exists such a cycle and NO otherwise). We need to:

(a) Let G be an input of HC. Transform it into an input for the black box.

(b) After the black box produces an answer (YES or NO), transform it into the answer to HC with input G.

Your solution should describe both steps (a) and (b) and argue why the construction works. The transformations in steps (a) and (b) need to be done in polynomial time.

Note: Both TSP and HC problems are known to be NP-complete. This means that we do not know if they can be solved in polynomial time, but most people think that no polynomial-time algorithm exists. Nevertheless, we can pretend to have a black box for TSP and see if such a black box would help to solve HC (and other problems).

HOMEWORK 1

Compute the SIFT descriptor of each interest point you detected in (1). To compute SIFT for MSER features, ﬁt the smallest ellipse that encloses the MSER region, then unfold the ellipse to a circle, and then extract the SIFT descriptor from the circle. If you ﬁnd that computing SIFT for MSER features is difﬁcult, then you may replace SIFT with HOG for MSER. To compute HOG, ﬁnd the smallest bounding box that encloses the MSER feature, and then extract the HOG descriptor from the box.

3) (10pts) Match the SIFT (or HOG) descriptors of the two images using the Hungarian algorithm, where the cost of matching two SIFTs (HOGs), h1 and h2, can be computed as the χ^2 distance: c(h1, h2) = ∑_i (h1i − h2i)^2 / (h1i + h2i). For the Hungarian algorithm, compute the matrix of matching costs for all pairs of detected interest points. Make the assumption that you extracted only 100 interest points from each of the two images, i.e., that the dimension of the matching cost matrix is 100×100.
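The χ^2 cost above can be sketched directly (the epsilon guard against empty bins is an addition not in the original formula):

```python
def chi2_distance(h1, h2, eps=1e-10):
    """Chi-squared distance: sum_i (h1[i] - h2[i])^2 / (h1[i] + h2[i])."""
    return sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))
```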

4) (15pts + 5pts bonus) For the above three (bonus four) types of interest points display the 10 best matches superimposed onto the original images:

4.1) (5pts) Harris-affine corners – Figure 5,
4.2) (5pts) Hessian-affine corners – Figure 6,
4.3) (5pts) MSER regions – Figure 7,
4.4) (bonus, 5pts) SURF features – Figure 8.

5) (30pts) For the above three (bonus four) types of interest points compute the precision of matching for the 10 best matches selected in (4). To this end, visually inspect the matching pairs. If they correspond to the same parts of the scene in the two images then the matching is correct. Compute the precision as the percentage of correct matching pairs of the 10 best matches. Show the four precision results for the four types of interest points in a table as:

Interest Points | Harris-affine | Hessian-affine | SURF | MSER
Precision       |               |                |      |

TABLE I. Precision in [%] of the 10 best matching interest points extracted from the two images.

6) (20pts) In addition to the required six (bonus eight) figures and the table, your report should also include a detailed description of the code you used:

• If you used an open-source code, then clearly state the source, authors, and files.
• If you implemented your own code, then submit a printout of your well-commented code.

In your report, all figures and tables must have captions. Each missing caption will be penalized with 5 points.

Fig. 1. An example of how to show an image pair, and their best matching points.

Suggested open-source code:

• http://www.robots.ox.ac.uk/~vgg/software/
• http://www.robots.ox.ac.uk/~vgg/research/caltech/phog.html
• http://www.vision.ee.ethz.ch/~surf/index.html
• http://www.vlfeat.org/overview/tut.html
• http://www.vlfeat.org/~vedaldi/code/sift.html

HOMEWORK 2

This homework is about shape retrieval. Given a query shape and a dataset of other shapes, our goal is to rank the K shapes from the dataset that are most similar to the query. This can be formulated as follows. A shape can be represented by a descriptor. Then finding the K most similar shapes amounts to finding the K descriptors with the smallest distance to the query descriptor. This assignment raises interesting questions about how to define the shape of an object and its descriptor, such that we can make a distinction between very similar objects (apples and device9), and at the same time correctly retrieve all instances of the query's class when that class has large intra-class variances.

For this assignment, you will use a subset of the popular MPEG-7 silhouette database: http://web.engr.oregonstate.edu/~sinisa/courses/OSU/CS556/HW/HW2/mpeg.zip. The MPEG dataset, a subset of the MPEG-7 CE Shape-1 Part-B shape classes, is illustrated in the figure below. There are six shape classes, and 20 instances of each class. Each file corresponds to one instance of a shape class. In particular, each file contains the Cartesian coordinates of points along the perimeter of the shape. The files are in the MATLAB "mat" format, named as Class%d Sample%d.mat. Each file can be loaded into the MATLAB workspace with the load command. This will create a variable x in the MATLAB workspace of size N × 2, where N is the number of points along the shape. The MPEG shapes can be easily visualized by executing plot(x(:,1),x(:,2)).

Assignment

1) Perform 100 runs of the following experiment:
1.1) Randomly select one query shape from the MPEG dataset, without repeating the query from a previous run.
1.2) Automatically retrieve K shapes from the rest of the MPEG dataset that are most similar to the current query, where K = 1, 2, 4, 6, 8, 10. For estimating similarity:
1.2.1) Compute the shape-context descriptors of the shapes, and their Euclidean and χ^2 distances,
1.2.2) Compute the beam-angle descriptors of the shapes, and their Euclidean and χ^2 distances.
1.3) Visually inspect the correctness of your result, and compute the retrieval error as the percentage of wrong shapes (those that do not belong to the same object class as the query) out of the K retrieved shapes.
2) Average your error over the 100 runs.
3) Plot how your average shape retrieval error changes as a function of K = 1, 2, 4, 6, 8, 10 for:
3.1) (15pts) Shape-context descriptors, and Euclidean distance
3.2) (15pts) Shape-context descriptors, and χ^2 distance

3.3) (15pts) Beam-angle descriptors, and Euclidean distance
3.4) (15pts) Beam-angle descriptors, and χ^2 distance
4) (5pts) Show in Figure 1a one example query and your retrieval results for: K = 10, shape-context descriptors, and χ^2 distance
5) (5pts) Show in Figure 1b the same example query as in Figure 1a, and your retrieval results for: K = 10, beam-angle descriptors, and χ^2 distance
6) (5pts) Show in Figure 2a another example query (different from that in Figure 1a) and your retrieval results for: K = 10, shape-context descriptors, and χ^2 distance
7) (5pts) Show in Figure 2b the same example query as in Figure 2a, and your retrieval results for: K = 10, beam-angle descriptors, and χ^2 distance

8) (20pts) In addition to the required four plots and four figures, your report should also include a detailed description of the code you used:

• If you used an open-source code, then clearly state the source, authors, and files.
• If you implemented your own code, then submit a printout of your well-commented code.

In your report, all ﬁgures and plots must have captions, and plot axes must have labels. Each missing caption or label will be penalized with 5 points.

HOMEWORK 3

This homework is about video segmentation. A sequence of video frames can be viewed as forming a space-time (2D+t) volume. As segmentation partitions an image into regions, video segmentation partitions a 2D+t volume into tubes. The tubes are spatially contiguous and temporally cohesive clusters of pixels from a number of consecutive frames, as illustrated in the figure below.

[Figure: frames t−1, t, t+1. Video segmentation yields tubes that are spatially contiguous and temporally cohesive clusters of pixels from a number of consecutive frames.]

The goal of this homework is video segmentation of a sequence of frames that can be downloaded from:

http://web.engr.oregonstate.edu/~sinisa/courses/OSU/CS556/HW/HW3/garden.zip

Implement the Ncuts algorithm to cluster all pixels from all video frames. The resulting Ncut clusters will represent the tubes. For computing the adjacency matrix of all video pixels, you may associate any suitable descriptor with the pixels, including, e.g., [R, G, B, x, y, t], where R, G, B are the standard color channels, and x, y are pixel locations in frame t. If the total number of video pixels is too large to handle, automatically partition the video into smaller parts; then, conduct video segmentation of each part using the Ncuts; and, finally, fuse the video segmentations from all video parts. Alternatively, you are encouraged to implement the out-of-sample Ncuts clustering for addressing a large number of pixels from all video frames.

Recommended literature:

• J. Shi and J. Malik, "Motion segmentation and tracking using normalized cuts," in ICCV, pages 1154–1160, 1998.
• C. Fowlkes, S. Belongie, and J. Malik, "Efficient spatiotemporal grouping using the Nystrom method," in CVPR, pages I-231–238, 2001.

Figure 1: A toy example that illustrates how to report your results. Organize the frames in the raster scan (from left to right). The top row shows the original frames, and the bottom row shows video segmentation results. Each tube should be marked with a unique color for better visualization.

1. (10pts) Specify the pixel descriptors that you used,
2. (60pts) Show in Figure 1 (15pts), Figure 2 (15pts), Figure 3 (15pts), and Figure 4 (15pts) your Ncuts results for the total number of clusters N = 5, 10, 15, 20, respectively. Organize the frames in each figure in the raster scan in two rows. The top row should show the original frames, and the bottom row should show the corresponding Ncuts results. Each tube that you have extracted should be marked with a unique color for better visualization. A toy example that illustrates how to report your results is shown in Fig. 1.

3. (10pts) Discuss your approach and performance in terms of:

(a) (5pts) How did you address memory issues when clustering a large number of pixels from all video frames?
(b) (5pts) What is the optimal N for the given video sequence? Running time?

4. (20pts) In addition to the required four figures, your report should also include a detailed description of the code you used:

• If you used an open-source code, then clearly state the source, authors, and files.
• If you implemented your own code, then submit a printout of your well-commented code.

HOMEWORK 4

This homework is about matching interest points between pairs of images. You are given 5 images, which form 10 pairs. For matching these 10 pairs, you will use: (1) Linear Assignment with one-to-one matching constraint, and (2) Quadratic programming with one-to-one matching constraint.

Assignment

1) For all 10 pairs of images, choose a suitable detector and descriptor of interest points, and:

1.1) Use the detector to extract point features in the images

1.2) Associate a descriptor vector to the extracted features

1.3) Define the matching similarity, and compute the matching similarity matrix between all:

1.3.1) Pairs of points from the two images, Ψ = [ψ_vv′], where v ∈ V and v′ ∈ V′
1.3.2) Quadruples of points from the two images, Ψ = [ψ_uu′vv′], where u, v ∈ V and u′, v′ ∈ V′

1.4) Use [ψ_vv′] and match the extracted points using the Linear Assignment with the one-to-one matching constraint
1.5) Use [ψ_uu′vv′] and match the extracted points using the Quadratic programming with the one-to-one matching constraint

1.6) Manually select in each image K points that are recognizable, unique corners or parts of objects in the scene, which have also been detected by your interest-point detector. Since the images are different, you may not be able to select the same points in every image; however, try to select points that can be observed in most images.
1.7) Visually inspect all pairs of images and their point matches. Let N ≤ K denote the number of matched points from image 1 among those K that you manually selected in image 1 in the previous step. Similarly, let N′ ≤ K denote the number of matched points from image 2 among those K that you manually selected in image 2 in the previous step. Let M′ ≤ N be the number of correct matches in image 2 of the N points of image 1. Let M ≤ N′ be the number of correct matches in image 1 of the N′ points of image 2. Compute the recall and precision of matching as follows:

Recall = (1/2) (M/K + M′/K)        Precision = (1/2) (M′/N + M/N′)

2) Compute the average recall and precision of matching across all 10 pairs of images.
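The definitions in 1.7 can be sketched as a small helper (function name hypothetical):

```python
def matching_recall_precision(M, M_prime, N, N_prime, K):
    """Recall = (M/K + M'/K)/2, Precision = (M'/N + M/N')/2, per the definitions above."""
    recall = 0.5 * (M / K + M_prime / K)
    precision = 0.5 * (M_prime / N + M / N_prime)
    return recall, precision
```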

IMPORTANT:

In your report, all ﬁgures must have captions. Each missing caption will be penalized with 5 points.

1) (5pts) Specify the interest-point detector that you used

2) (5pts) Specify the point descriptor that you used,

3) (5pts) Specify the matching similarity between two points from two images,

4) (5pts) Specify the matching similarity between two pairs of points (a quadruple) from two images,

5) (30pts) Show your qualitative results in Figure 1 and Figure 2. Select one pair of images from those 10. Display them next to each other, and superimpose their point matches onto the original images. An example of how to show an image pair and their matching points is given in the figure below. Specifically:

5.1) (15pts) In Figure 1, show your qualitative matching results when using the Linear Assignment with the one-to-one matching constraint, and
5.2) (15pts) In Figure 2, show your qualitative matching results when using the Quadratic programming with the one-to-one matching constraint.

6) (15pts) In Figure 3, show the two plots of your average matching recall for the Linear Assignment and Quadratic programming, respectively, with the one-to-one matching constraint, as a function of K = 5, 10, 15, 20.

7) (15pts) In Figure 4, show the two plots of your average matching precision for the Linear Assignment and Quadratic programming, respectively, with the one-to-one matching constraint, as a function of K = 5, 10, 15, 20.

8) (20pts) In addition to the required four ﬁgures, your report should also include a detailed description

of the code you used:

If you used an open-source code, then clearly state the source, authors, and ﬁles.

If you implemented your own code, then submit a printout of your well-commented code.

HOMEWORK 5

This homework is about stereo geometry. You will use the three stereo pairs of images, provided on the class website for this assignment.

Assignment

1) (75pts) For each stereo pair, image 1 and image 2, manually select N ≫ 8 pairs of corresponding points {(pi, p′i) : i = 1, . . . , N}.

1.1) (5pts) Estimate the fundamental matrix F and report its elements.

1.2) (5pts) Show in Figure 1 two example points in image 1, and their epipolar lines in image 2

(i.e., superimpose the epipolar lines over image 2; use a visible color to depict the epipolar

lines). In the caption of Figure 1, specify the two epipolar lines. Note that if your estimate of

F is wrong then the two points in image 2 that correspond to your examples in image 1 will

not lie on the respective epipolar lines. If your epipolar lines in Figure 1 do not pass through

the corresponding points you will lose 2 points for this task.

1.3) (5pts) Show in Figure 2 two example points in image 2, and their epipolar lines in image 1

(i.e., superimpose the epipolar lines over image 1; use a visible color to depict the epipolar

lines). These example points should be different from those in the previous task. In the caption

of Figure 2, specify the two epipolar lines. Note that if your estimate of F is wrong then the

two points in image 1 that correspond to your examples in image 2 will not lie on the respective

epipolar lines. If your epipolar lines in Figure 2 do not pass through the corresponding points

you will lose 2 points for this task.

1.4) (5pts) Show in Figure 3 clearly marked epipoles of image 1 and image 2. In the caption of

Figure 3, specify the two epipoles.

1.5) (5pts) Estimate and specify the coordinates of reconstructed 3D points in the scene that

correspond to the four example 2D points which you selected in image 1 and image 2 in

the above tasks. Let X1, X2, X3, X4 denote the coordinates of these 3D points. Compute and

specify the ratio ‖X1 − X2‖2 / ‖X3 − X4‖2. Comment on the quality of your estimates of X1, X2, X3, X4. Does

your result make sense in the scene? That is, did you correctly estimate the scene proportions between the 3D points X1, X2, X3, X4? In the case you get significantly wrong estimates of

X1, X2, X3, X4, you will lose 2 points for this task.

2) (15pts) Brieﬂy describe the mathematical formulation of a method you used for estimating:

2.1) (5pts) F,

2.2) (5pts) the epipolar lines and epipoles,

2.3) (5pts) X1, X2, X3, X4
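For 2.1) and 2.2), one standard choice is the 8-point algorithm, after which the epipolar line of a point x in the other image is l′ = F x. A minimal numpy sketch, assuming pixel coordinates and skipping the Hartley normalization that a real implementation should include:

```python
import numpy as np

def estimate_F(pts1, pts2):
    """Unnormalized 8-point algorithm (a sketch; a real implementation
    should add Hartley normalization). pts1, pts2: (N, 2) arrays of
    corresponding points, N >= 8."""
    x1, y1 = pts1[:, 0], pts1[:, 1]
    x2, y2 = pts2[:, 0], pts2[:, 1]
    # Each correspondence gives one row of A in A f = 0, obtained by
    # expanding the epipolar constraint x2^T F x1 = 0.
    A = np.stack([x2 * x1, x2 * y1, x2, y2 * x1, y2 * y1, y2,
                  x1, y1, np.ones_like(x1)], axis=1)
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)       # null vector of A, up to scale
    U, S, Vt = np.linalg.svd(F)    # enforce rank 2
    S[2] = 0.0
    return U @ np.diag(S) @ Vt

def epipolar_line(F, p1):
    """Epipolar line l' = F x in image 2 for point p1 = (x, y) in image 1."""
    return F @ np.array([p1[0], p1[1], 1.0])
```

The epipoles are the null vectors of F and of F transposed, recoverable from the same SVD.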

3) (10pts) In addition to the required nine ﬁgures and speciﬁcations, your report should also include

a detailed description of the code you used:

• If you used an open-source code, then clearly state the source, authors, and ﬁles.

• If you implemented your own code, then submit a printout of your well-commented code.

You may choose to use any available software package, including

http://www.robots.ox.ac.uk/~vgg/hzbook/code/


CS556: HOMEWORK 6

due 03/22/2012

This homework is about recognizing people in images using the bag-of-words model. For evaluation,

you will use the Graz-02 images of people, which can be downloaded from:

http://lear.inrialpes.fr/people/marszalek/data/ig02/

The images are divided into training and test sets. Use the training images to learn a representation of

a person, and use the test images to test your approach. Follow the instructions for training and testing.

Training

1) You are given training images that are labeled with the class name (i.e., people), and segmentation

masks where the target objects (i.e., people) are located in the images, as illustrated in the ﬁgure

below.

Fig. 1. An example image from the Graz-02 images of people. The original images are accompanied with ground-truth masks

indicating the image regions occupied by people. Red color marks the visible object parts and green is used for occlusions.

2) Extract point features of your choosing from each image. The features that fall within the masked

area are called positive examples, and the remaining features are negative examples. Note that there

will be more negative than positive examples. You may discard superﬂuous negative examples (e.g.,

by random selection) to keep the numbers of positive and negative samples balanced. Associate

descriptors of your choosing to the extracted features.

3) Use the K-means algorithm to cluster all positive and negative descriptors from all training images,

where K ∈{300, 1000}. The cluster means represent K codewords of the resulting dictionary.

Record cluster membership of each descriptor from each training image, as follows. Each descriptor

is mapped to

3.1) The closest codeword (i.e., one descriptor generates one codeword occurrence in the image).

This mapping is referred to as discrete mapping.

3.2) All K codewords with associated weights, computed as distances of the descriptor to every

codeword, normalized by the maximum distance (i.e., one descriptor generates all K codeword

occurrences in the image). This mapping is referred to as continuous mapping.

4) Characterize every masked and non-masked region in training images by the histogram of codewords

that fall within that region. Note that this has to be done for both K = 300 and K = 1000 codewords.

Compute the histogram as follows:

4.1) For discrete mapping, simply count codeword occurrences and normalize their counts. This will generate histogram x_i^(1) for image region i.

4.2) For continuous mapping, compute the weighted counts of all codeword occurrences, and normalize their counts. This will generate histogram x_i^(2) for image region i.
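The two mappings in steps 3.1) and 3.2) and the histograms of step 4 can be sketched as follows. This is a hypothetical numpy helper; in particular, the continuous weights follow the literal reading of step 3.2 (distances normalized by the maximum distance), which an actual implementation might replace with a similarity-based weighting:

```python
import numpy as np

def codeword_histograms(descriptors, codewords):
    """Codeword histograms of one image region under both mappings.
    descriptors: (n, d) array of descriptors in the region;
    codewords: (K, d) array of cluster means from K-means."""
    # Pairwise Euclidean distances, shape (n, K).
    dists = np.linalg.norm(descriptors[:, None, :] - codewords[None, :, :],
                           axis=2)
    K = codewords.shape[0]
    # Discrete mapping (3.1): each descriptor counts toward its nearest
    # codeword; normalized counts give x_i^(1).
    nearest = dists.argmin(axis=1)
    h1 = np.bincount(nearest, minlength=K).astype(float)
    h1 /= h1.sum()
    # Continuous mapping (3.2): every codeword receives a weight, here the
    # distance normalized by the descriptor's maximum distance, as the text
    # literally states; normalized weighted counts give x_i^(2).
    w = dists / dists.max(axis=1, keepdims=True)
    h2 = w.sum(axis=0)
    h2 /= h2.sum()
    return h1, h2
```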

5) Form the sets of positive and negative samples {(x_i, y_i)} for training a person detector. Positive samples correspond to the masked image regions (i.e., people occurrences), described by codeword histograms x_i, and have y_i = 1. Negative samples correspond to background, described by codeword histograms x_i, and have y_i = −1.

6) Use the SVM classifier as a person detector. To this end, submit the above sets of positive and negative samples to publicly available software for learning a linear SVM (e.g., WEKA http://www.cs.waikato.ac.nz/ml/weka/, or libSVM http://www.csie.ntu.edu.tw/~cjlin/libsvm/). Note that

you will need to train four independent SVM detectors for:

6.1) K = 300, discrete mapping

6.2) K = 1000, discrete mapping

6.3) K = 300, continuous mapping

6.4) K = 1000, continuous mapping
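What the four detectors in step 6) fit can be illustrated with a tiny Pegasos-style linear SVM. This is only a sketch of the objective those packages optimize; for the assignment itself WEKA or libSVM should be used, as stated above:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Tiny Pegasos-style linear SVM, only to illustrate the objective that
    WEKA/libSVM would fit on the (x_i, y_i) samples above. X: (n, K) codeword
    histograms; y: labels in {+1, -1}. The bias term is omitted for brevity
    (it can be emulated by appending a constant feature). Returns w."""
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            margin = y[i] * (X[i] @ w)
            w *= 1.0 - 1.0 / t              # shrink from the L2 regularizer
            if margin < 1:                  # hinge-loss subgradient step
                w += (1.0 / (lam * t)) * y[i] * X[i]
    return w
```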

Testing

1) Given test images, your goal is to detect and localize people in these images. For each test image,

do the following:

1.1) Since you do not know a priori the right location and scale of people occurrences in the test

image, you need to analyze a wide range of image locations and scales. To this end, you may

implement a scanning window procedure. Alternatively, you may perform a hierarchical image

segmentation, and thus provide direct access to different image parts. Choose one approach.

The result will be a set of hierarchical image regions (or hierarchical windows).

1.2) Extract from each region (or window) the same interest points of your choice as in training.

Then, associate with each point the same descriptor of your choice as in training.

1.3) Map the extracted features to the dictionary of codewords, using discrete and continuous

mapping, as in training. Note that this has to be done for both K = 300 and K = 1000 codewords.

1.4) Compute the histograms of codewords for each region (or window) in the test image, as in

training. Speciﬁcally, for discrete mapping, count codeword occurrences and normalize their

counts, denoted as x^(1). For continuous mapping, compute the weighted counts of all codeword occurrences, and normalize their counts, denoted as x^(2). This needs to be done for both K = 300 and K = 1000.

1.5) Classify each histogram x by the corresponding SVM-based people detector.

1.6) All regions (or windows) that are labeled as positive by the linear SVM represent people detections in the test image. Implement a non-maxima suppression procedure, aimed at removing

implausible solutions (e.g., windows i and j are positive, and i is fully embedded within j).

1.7) For each region (or window) that is classiﬁed as positive and not removed in the above step,

D, you will compute the intersection and union of its area with the closest ground truth mask

of a person occurrence in the test image, G. Then, you will compute a ratio of the intersection

and union areas, ρ = |G ∩ D| / |G ∪ D|. Let t denote a threshold. If ρ > t then your detection is called True Positive (TP). If ρ ≤ t then your detection is called False Positive (FP).
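The ratio ρ in step 1.7) is intersection-over-union. A minimal sketch for the common case of axis-aligned detection windows (region masks work the same way, with pixel sets in place of rectangles):

```python
def overlap_ratio(det, gt):
    """rho = area(G intersect D) / area(G union D) for two axis-aligned
    boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(det[0], gt[0]), max(det[1], gt[1])
    ix2, iy2 = min(det[2], gt[2]), min(det[3], gt[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((det[2] - det[0]) * (det[3] - det[1]) +
             (gt[2] - gt[0]) * (gt[3] - gt[1]) - inter)
    return inter / union if union > 0 else 0.0
```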

1.8) Compute the total number of TPs and FPs, for

1.8.1) K = 300

A) Discrete mapping of features to the codewords, and t ∈{0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65}

B) Continuous mapping of features to the codewords, and t ∈{0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65}

1.8.2) K = 1000

A) Discrete mapping of features to the codewords, and t ∈{0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65}

B) Continuous mapping of features to the codewords, and t ∈{0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65}

2) Estimate the average running time of each step above, on your computer.

3) Compute the average number of TPs and FPs across all test images. Plot the ROC curve of your

average results, and compute its area-under-curve (AUC) for

3.1) K = 300

3.1.1) Discrete mapping of features to the codewords, and t ∈{0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65}

3.1.2) Continuous mapping of features to the codewords, and t ∈{0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65}

3.2) K = 1000

3.2.1) Discrete mapping of features to the codewords, and t ∈{0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65}

3.2.2) Continuous mapping of features to the codewords, and t ∈{0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65}

1) (5pts) Specify the point features that you used,

2) (5pts) Specify the point descriptor that you used,

3) (5pts) Specify the scanning window, or image segmentation algorithm that you used,

4) (10pts) Show in Figure 1 the ROC curve of your average results for K = 300, discrete mapping

of features to the codewords, and t ∈{0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65}. The caption of Figure 1

should specify the corresponding AUC.

5) (10pts) Show in Figure 2 the ROC curve of your average results for K = 1000, discrete mapping

of features to the codewords, and t ∈{0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65}. The caption of Figure 2

should specify the corresponding AUC.

6) (10pts) Show in Figure 3 the ROC curve of your average results for K = 300, continuous mapping

of features to the codewords, and t ∈{0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65}. The caption of Figure 3

should specify the corresponding AUC.

7) (10pts) Show in Figure 4 the ROC curve of your average results for K = 1000, continuous mapping

of features to the codewords, and t ∈{0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65}. The caption of Figure 4

should specify the corresponding AUC.

8) (10pts) Show in Figure 5, four example test images, and your successful people detection and

localization results. Clearly mark the regions (or windows) that you classiﬁed as positive in the

original images.

9) (10pts) Show in Figure 6, four “negative” example test images, where your approach failed to detect

and localize people. Clearly mark the regions (or windows) that you classified as positive in the original images but whose overlap with the ground-truth masks of people is less than the threshold t = 0.35.

10) (5pts) Discuss your performance in terms of how certain parameters of your approach change the

results. What is the optimal choice for K, mapping, and t?

11) (10pts) In addition to the required six ﬁgures, your report should also include a detailed description

of the code you used:

• If you used an open-source code, then clearly state the source, authors, and ﬁles.

• If you implemented your own code, then submit a printout of your well-commented code.


CSI 4105 Design and Analysis of Algorithms II Winter 2012

Computer Science University of Ottawa

Homework Assignment #2 (100 points, weight 6.67%)

Due: Friday Mar 16, at 11:30 p.m. (in lecture)

1. (25 points) Let NearbySet be the problem defined as follows. Given a graph G and a number k, is there a way to select a set N ⊆ V(G) with |N| = k such that every vertex in the graph is either in N or is connected by an edge to a vertex in N? Show that NearbySet is NP-complete.
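A brute-force checker for NearbySet can help sanity-test a reduction on small instances (names are hypothetical; the exponential running time is expected for an NP-complete problem):

```python
from itertools import combinations

def has_nearby_set(adj, k):
    """Brute-force decision procedure for NearbySet: is there a set N of k
    vertices such that every vertex is in N or adjacent to N?
    adj: dict mapping each vertex to the set of its neighbours."""
    vertices = set(adj)
    for N in combinations(adj, k):
        covered = set(N)
        for v in N:
            covered |= adj[v]
        if covered == vertices:
            return True
    return False
```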

2. (25 marks) Consider the treasure splitting problem: there are n objects 1, 2, . . . , n, each of value v_i, 1 ≤ i ≤ n. Two pirates need to split the treasures evenly. The TreasureSplitting problem asks: given v_1, v_2, . . . , v_n, is it possible to partition {1, 2, . . . , n} into two sets S1, S2 (partitioning means S1 ∪ S2 = {1, 2, . . . , n} and S1 ∩ S2 = ∅) such that

Σ_{i∈S1} v_i = Σ_{j∈S2} v_j ?

Prove that TreasureSplitting is NP-complete.
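TreasureSplitting is the classic Partition problem. A pseudo-polynomial checker is handy for testing a reduction on small instances; its running time does not contradict NP-completeness, since the total value is exponential in the number of input bits:

```python
def can_split_evenly(values):
    """Pseudo-polynomial decision procedure for TreasureSplitting: can the
    values be partitioned into two subsets of equal sum? Runs in
    O(n * sum(values)) time via reachable subset sums."""
    total = sum(values)
    if total % 2:
        return False
    reachable = {0}
    for v in values:
        reachable |= {s + v for s in reachable}
    return total // 2 in reachable
```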

3. (25 points) Consider a special case of QSAT (Quantified 3-SAT) in which the formula Φ(x1, . . . , xn) has no negated variables. We define the decision problem NNQSAT to be the problem of deciding the truth value of:

∃x1 ∀x2 . . . ∃x_{n−2} ∀x_{n−1} ∃x_n Φ(x1, x2, . . . , xn),

where n is odd and Φ(x1, x2, . . . , xn) is a 3-CNF formula with no negated variables. Give a polynomial time algorithm to solve NNQSAT; analyse the running time of the algorithm.

4. (25 points) Define the choice set and describe a backtracking algorithm for the problem: given G and k, find all k-vertex colourings of G.

CSI 4105 Design and Analysis of Algorithms II Winter 2012

Computer Science University of Ottawa

Homework Assignment #3 (100 points, weight 6.67%)

Due: Friday Mar 30, at 11:30 p.m. (in lecture)

Internet-free answers required. I have inserted enough hints so that inspiration should not be sought

anywhere else. I warn you that several internet sources for some of these questions are either not

detailed enough or have subtle mistakes. Any answers similar to external sources (other than your

mind or textbook) will be severely penalized.

1. (35 points) Bin Packing Approximation Algorithm

Suppose that we are given a set of n objects, where the size of the ith object satisfies 0 < s_i ≤ 1. We want to pack the objects into the minimum number of bins of size 1. Each bin can hold any subset of the objects whose total size does not exceed 1. You know the decision version of bin-packing is NP-complete (proven in midterm test 2).

The first-fit heuristic takes each object in turn and places it into the first bin that can accommodate it. The following subquestions are steps towards showing that the first-fit heuristic is a 2-approximation algorithm for the problem.

Let S = Σ_{i=1}^{n} s_i.

(a) (5 marks) Prove that the optimal number of bins required is at least ⌈S⌉.

(b) (5 marks) Prove that the first-fit heuristic leaves at most one bin less than half full.

(c) (7 marks) Prove that the number of bins used by the first-fit heuristic is never more than ⌈2S⌉.

(d) (8 marks) Prove that the first-fit heuristic is an approximation algorithm with an approximation ratio of 2.

(e) (10 marks) Application. At your co-op term you are asked to minimize the number of trucks needed to transport a variety of heavy items. Space is not an issue, as the items are so heavy that you would exceed the weight capacity before the space is full. Each truck has weight capacity 1000Kg, and each of the n items has a given weight. Your boss requires that you design a polynomial time algorithm to solve this problem to optimality in 3 weeks, and says that failing to do so would affect his evaluation of your co-op report. After spending one week trying to find a polynomial time algorithm, you remember your CSI4105 and are thankful that you can argue with your boss that his request is not reasonable. However, you explain that not all is lost, and you are able to design an approximation algorithm for the problem. The next steps outline your explanation:

Give a brief explanation of why the truck problem is NP-hard.

Show how to use the 2-approximation algorithm for bin-packing in order to solve the truck problem.
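The first-fit heuristic described above can be sketched in a few lines (a hypothetical helper, with unit-capacity bins):

```python
def first_fit(sizes):
    """First-fit heuristic: place each object into the first bin (of
    capacity 1) that can hold it. Returns the list of bin loads."""
    bins = []
    for s in sizes:
        for i, load in enumerate(bins):
            if load + s <= 1.0:
                bins[i] += s
                break
        else:
            bins.append(s)
    return bins
```

At most one bin ends up less than half full: if two bins were, the contents of the later one would have fit into the earlier one, which first-fit never allows.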

2. (30 marks) Bin Packing Exact Algorithm

Write a backtracking algorithm to solve bin-packing optimally (see the problem in the previous question); the solution of the problem should yield the minimum number of bins. The algorithm should use polynomial space; the algorithm is not expected to run in polynomial time, since the decision version of the problem is NP-complete (this was hopefully proven by you in midterm test 2). Justify its polynomial space and analyse its running time by giving an upper bound on it.

3. (35 points) Competitive Facility Location

Recall that in Competitive Facility Location, we are given a graph G with a non-

negative value bi attached to each node i, and a bound B. Two players alternately select

vertices of G, so that the set of selected vertices at all times forms an independent set (can't

select a neighbour of an already selected node). Player 2 wins if he ultimately selects a set of

vertices of total value at least B; Player 1 wins if he prevents this from happening.

The question is: Given the graph G with values bi attached to each node i and given the

bound B, is there a strategy by which Player 2 can force a win?

Consider the example below.

Two coffee houses play this game: player 1 is GIMEBUCK$ and player 2 is BRIDGEFAIR.

In the graph below, the answer is Yes for B = 20. To verify that BRIDGEFAIR is guaranteed

to be able to set up facilities for a total value of at least 20, we need to analyse some cases:

1) Case 1: player 1 GIMEBUCK$ picks one of vertices 2,3,4,5,6,7,8.

In this case, BRIDGEFAIR wins by picking vertex 1, yielding value 100.

2) Case 2: player 1 GIMEBUCK$ picks vertex 1.

BRIDGEFAIR can now pick any of vertices 4,5,6,7,8; the winning strategy will be to pick

vertex 6, earning value 10 this round, and forcing GIMEBUCK$ to pick one of vertices 4 or 8.

In either case, BRIDGEFAIR can then pick the other unpicked vertex (8 or 4, respectively),

and accumulate 10 more in value, for a total value of 20.

Show that Competitive Facility Location is in PSPACE, by giving an algorithm

(pseudo code) that solves the problem in a polynomial amount of space. Justify why

the space used is polynomial.
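The PSPACE argument can be made concrete with a depth-first game search: the recursion depth is at most |V|, and each frame stores only the current selection. A hedged sketch (the representation of the graph and values is hypothetical):

```python
def player2_wins(adj, value, bound, selected=frozenset(), p2_total=0, turn=1):
    """Depth-first search of the game tree for Competitive Facility Location.
    adj: vertex -> set of neighbours; value: vertex -> b_i;
    turn 1 is Player 1, turn 2 is Player 2.
    Returns True iff Player 2 can force a total value >= bound."""
    if p2_total >= bound:
        return True
    blocked = set(selected)
    for v in selected:
        blocked |= adj[v]          # selected vertices must stay independent
    moves = [v for v in adj if v not in blocked]
    if not moves:
        return False               # game over, Player 2 short of the bound
    if turn == 2:                  # Player 2 needs some winning move
        return any(player2_wins(adj, value, bound, selected | {v},
                                p2_total + value[v], 1) for v in moves)
    return all(player2_wins(adj, value, bound, selected | {v},
                            p2_total, 2) for v in moves)
```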


Approximation Algorithms Spring 2012

Homework Assignment 1

Due: Tuesday, Apr. 24 in class.

General remark: whenever you are asked to provide an α-approximation algorithm, you need to prove that your algorithm produces a feasible α-approximate solution.

1. Greedy algorithms for Job Interval Scheduling. Recall that in the job interval scheduling problem we have a set J of jobs, and each job j ∈ J has a release date r_j, a deadline d_j and a processing time p_j. Job j is associated with a set I_j of time intervals: the set of all length-p_j intervals contained in [r_j, d_j]. The goal is to schedule the maximum possible number of jobs on one machine. We have shown in class that the following algorithm achieves a factor 2-approximation:

GREEDY

Among all available jobs, choose job j that has an interval I ∈ I_j with leftmost right endpoint; schedule j on I and discard all job intervals overlapping with I.

A mirror reflection of GREEDY is an algorithm we call GREEDY':

GREEDY'

Among all available jobs, choose job j that has an interval I ∈ I_j with rightmost left endpoint; schedule j on I and discard all job intervals overlapping with I.

Algorithm GREEDYx2 runs GREEDY and GREEDY' and returns the better of the two solutions.
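To experiment with the rule GREEDY uses, here is a sketch of the special case where each job has exactly one interval (r_j + p_j = d_j). In that case choosing the leftmost right endpoint is the classic optimal activity-selection rule; the 2-approximation claim above concerns the general multi-interval setting:

```python
def greedy_by_right_endpoint(intervals):
    """GREEDY specialized to one interval per job: repeatedly schedule the
    available interval with the leftmost right endpoint and discard every
    interval that overlaps it."""
    scheduled, last_end = [], float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:      # does not overlap the last scheduled one
            scheduled.append((start, end))
            last_end = end
    return scheduled
```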

(a) Show that GREEDYx2 achieves a factor 2-approximation.

(b) Show that your analysis is asymptotically tight.

2. Asymmetric Traveling Salesman Problem. Recall that we have shown in class an O(log n)-

approximation algorithm for the problem using Cycle Covers. Prove that our analysis of this

algorithm is asymptotically tight.

3. Steiner Tree. In the directed Steiner Tree problem, we are given a directed edge-weighted graph G = (V, E), a root vertex r and a subset T ⊆ V of vertices called terminals. The goal is to find a minimum-cost subset E′ of edges, such that in the graph induced by E′, there is a directed path from r to every terminal in T.

(a) Show an approximation-preserving reduction from Set Cover to directed Steiner Tree.

Hint: Given an instance of Set Cover, construct a 3-layered instance of Steiner Tree. First

layer contains the root r, second layer contains Steiner vertices and third layer contains

terminals. Prove that your reduction is approximation-preserving.

(b) Recall that there is an O(log n)-approximation algorithm for Set Cover, and there is no

factor-c log n approximation algorithm (for some constant c) unless P = NP. What is the

implication of your reduction to the approximability of directed Steiner Tree?

4. Set Cover. Denote by GREEDY the greedy algorithm we defined in class for the unweighted Set Cover problem. Recall that the algorithm chooses at each step a set that covers the maximum number of elements that are not covered yet.

Given a set cover instance I, let s_I denote the maximum set size, s_I = max_{S∈I} |S|. Show that GREEDY achieves an O(log(s_I))-approximation.

Hint: Partition the algorithm into phases, according to max_{S∈I} |S ∩ U′|, where U′ is the set of elements that are not covered yet.

5. Local Ratio. Given a weighted Set Cover instance I = (U, S), where U is the set of elements and S is a collection of subsets of U, the frequency of element i is the number of sets in S containing i. Let f_I denote the frequency of the most frequent element. Show an f_I-approximation algorithm for weighted Set Cover using the Local Ratio technique.

Extra Credit. Consider the following algorithm for the Minimum Steiner Tree problem.

Find a minimum-weight spanning tree T of the graph. Root T at any terminal t.

While there is a leaf vertex v in the current tree T such that v is not a terminal, delete v from T.

Return the final tree T. Notice that each leaf of T is a terminal.

What is the approximation factor that this algorithm achieves? Show asymptotically matching upper and lower bounds.

Approximation Algorithms Spring 2012

Homework Assignment 2

General remark: whenever you are asked to provide an α-approximation algorithm, you need to prove that your algorithm outputs a feasible α-approximate solution.

1. Metric k-cluster. In the metric k-cluster problem, we are given a complete undirected graph with non-negative weights w_e on edges satisfying the triangle inequality, and an integer k > 0. The goal is to partition all vertices into k clusters V_1, . . . , V_k, while minimizing the maximum distance of any pair of points in a cluster. So we seek to minimize the maximum, over all clusters V_i, of max_{u,v∈V_i} w(u, v).

(a) Show a factor-2 approximation algorithm for the problem.

(b) Prove that the problem is hard to approximate up to any factor smaller than 2.

Hint: Reduction from 3-coloring.

2. k-dispersion. The input to the k-dispersion problem is a complete undirected graph G = (V, E) with weights w_e ≥ 0 on edges e ∈ E that obey the triangle inequality. Additionally, we are given an integer k > 0. The goal is to choose a subset S ⊆ V of k vertices that are as far apart from each other as possible. Namely, we are trying to maximize min_{s,s′∈S} w(s, s′).

(a) Consider the following greedy algorithm: start with S = {v} for an arbitrary node v ∈ V, and repeatedly add a node furthest from the nodes currently in S, until |S| = k. Show that this is a 2-approximation algorithm.

(b) Prove that the problem is hard to approximate up to any factor smaller than 2.
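The greedy algorithm in part (a) is farthest-point traversal. A sketch, assuming `dist` is a metric supplied by the caller:

```python
def greedy_dispersion(points, k, dist):
    """Farthest-point greedy for k-dispersion: start from an arbitrary
    point, then repeatedly add the point maximizing the distance to the
    current set S. `dist` must obey the triangle inequality."""
    S = [points[0]]
    while len(S) < k:
        S.append(max((p for p in points if p not in S),
                     key=lambda p: min(dist(p, s) for s in S)))
    return S
```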

3. Multidimensional knapsack and bin packing

(a) In the multidimensional knapsack problem, the input is a set V of n non-negative d-dimensional vectors. Each vector v_i ∈ V has a profit w_i ≥ 0. Additionally, we are given a d-dimensional non-negative vector B (the knapsack capacity). The goal is to find a maximum-profit subset V′ ⊆ V of vectors, whose sum is bounded by B coordinate-wise.

Show a pseudo-polynomial time algorithm for multidimensional knapsack when d is a constant independent of n. What is the running time of your algorithm?

(b) In the multidimensional bin-packing problem, we are given a set V of n non-negative d-dimensional vectors. The goal is to partition the vectors into the minimum number of bins, such that the sum of the vectors in every bin is bounded by 1 in every coordinate.

Show an O(d)-approximation algorithm for multidimensional bin-packing.

4. Euclidean TSP. Show that the basic deterministic construction of the quad-tree (the one without the random shift) cannot be used for the PTAS for Euclidean TSP. (Show an example in which any well-behaved tour has cost at least (1 + ε)·OPT for some constant ε > 0.)

CSCI 3104: Algorithms

Homework 1

Due at 1:00pm on Wednesday, September 12, 2012. Submit your solution electronically

at http://moodle.cs.colorado.edu in PDF format, or turn in the paper version before

class. Make sure to include your name, student id, email address, and the Honor Code

1. In each of the following situations, indicate whether f = O(g), or f =

(g), or both

(in which case f = [1](g)). Brie

y explain why.

(a) f(n) = 10n^5 + 8n^2, g(n) = 20n^4 + 7n^3 + 300

(b) f(n) = log 8n, g(n) = log(n^2)

(c) f(n) = n^3 log n, g(n) = (1/5) · 3^n

(d) f(n) = (3/2)^n, g(n) = 6n^3

2. We introduced in class that when analyzing algorithm complexity, we can ignore the lower-order terms and the coefficient of the leading term. For example, 3n + 5 ⇒ n. Using the formal definition of the big-O notation, show that 3n + 5 = O(n) and n = O(3n + 5), in other words, 3n + 5 = Θ(n).
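One way the requested argument can go, using the formal definition (the constants c and n_0 below are one valid choice, not the only one):

```latex
3n + 5 \le 3n + n = 4n \quad \text{for all } n \ge 5,
  \quad\text{so } 3n + 5 = O(n) \text{ with } c = 4,\ n_0 = 5;
\qquad
n \le 3n + 5 \quad \text{for all } n \ge 1,
  \quad\text{so } n = O(3n + 5) \text{ with } c = 1,\ n_0 = 1.
```

Together the two bounds give 3n + 5 = Θ(n).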

3. The Fibonacci numbers F_0, F_1, F_2, . . . are defined by the rule

F_0 = 0, F_1 = 1, F_n = F_{n−1} + F_{n−2}.

Use induction to prove that F_n ≥ 2^{0.5n} for n ≥ 6.

4. Write a python program to compute the Fibonacci numbers F_8, F_28, F_48. What are the three values? What is the total number of additions needed by your program?
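One way to implement the requested program, with an explicit addition counter (the values in the comment were checked against the recurrence by hand):

```python
def fibonacci_with_count(n):
    """Iterative F_n together with the number of additions performed."""
    a, b, additions = 0, 1, 0          # a = F_0, b = F_1
    for _ in range(n):
        a, b = b, a + b                # one addition per step
        additions += 1
    return a, additions

# F_8 = 21, F_28 = 317811, F_48 = 4807526976; computing each value
# independently this way costs 8 + 28 + 48 = 84 additions in total.
print([fibonacci_with_count(n) for n in (8, 28, 48)])
```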

• Combinatorial algorithms: greedy algorithms and charging schemes
• Polynomial time approximation schemes
• LP-rounding and integrality gaps; algorithms for Set Cover, Congestion Minimization, directed and undirected Multicut; flow-cut gaps
• Metric methods; approximation algorithm for Sparsest Cut
• Primal-Dual schema; algorithms for Set Cover, Steiner Network
• Iterative rounding algorithm for Survivable Network Design
• Oblivious routing
• Hardness of approximation: Set Cover, Asymmetric k-center
• SDP-rounding: Max Cut, Sparsest Cut, Expander Flows
• Unique Games conjecture and related problems

MATH 415. Analysis II

• Sequences and series of functions of a real variable
• uniform convergence
• power series
• Taylor series
• Fourier series
• topology of n dimensional space
• implicit function theorem
• calculus of the plane and 3-dimensional space
• metric spaces
• Stieltjes integration
• Lebesgue integration

CPR E 511. Design and Analysis of Algorithms

• basic algorithm design
• analysis techniques
• amortized analysis
• randomized algorithms
• Applications to sorting
• graphs
• geometry
• NP completeness
• approximation algorithms

• Divide and Conquer, Sorting and Searching, and Randomized Algorithms
• Introduction; "big-oh" notation and asymptotic analysis.
• Divide-and-conquer basics; the master method for analyzing divide and conquer algorithms.
• The QuickSort algorithm and its analysis; probability review.
• Linear-time selection; graphs, cuts, and the contraction algorithm.
• Graph Search, Shortest Paths, and Data Structures
• Breadth-first and depth-first search; computing strong components; applications.
• Dijkstra's shortest-path algorithm.
• Heaps; balanced binary search trees.
• Hashing; bloom filters.
• Greedy Algorithms, Minimum Spanning Trees, and Dynamic Programming
• Two motivating applications; selected review; introduction to greedy algorithms; a scheduling application; Prim's MST algorithm.
• Kruskal's MST algorithm and applications to clustering; advanced union-find (optional).
• Huffman codes; introduction to dynamic programming.
• Advanced dynamic programming: the knapsack problem, sequence alignment, and optimal binary search trees.
• Shortest Paths Revisited, NP-Complete Problems and What To Do About Them
• The Bellman-Ford algorithm; all-pairs shortest paths.
• NP-complete problems and exact algorithms for them.
• Approximation algorithms for NP-complete problems.
• Local search algorithms for NP-complete problems; the wider world of algorithms.
• Union-Find
• Analysis of Algorithms
• Stacks and Queues
• Elementary Sorts
• Mergesort
• Quicksort
• Priority Queues
• Elementary Symbol Tables
• Balanced Search Trees
• Geometric Applications of BSTs
• Hash Tables
• Symbol Table Applications