
#### Convex Optimization, Spring 2016: Homework 6

Due at 8:15 a.m., May 26, 2016, before class.

Note: Please also submit a printout of your Matlab code (use the data provided in the xx.dat files and rename them as xx.mat).

1) Distributed lasso. Consider the $\ell_1$-regularized least-squares ("lasso") problem

$$
\text{minimize} \quad f(z) = \frac{1}{2}\left\| \begin{bmatrix} A_1 & 0 & B_1 \\ 0 & A_2 & B_2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ y \end{bmatrix} - \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} \right\|_2^2 + \lambda \left\| \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \right\|_1 \qquad (1)
$$

with optimization variable $z = (x_1, x_2, y) \in \mathbf{R}^{n_1} \times \mathbf{R}^{n_2} \times \mathbf{R}^p$. We can think of $x_i$ as the local variable for system $i$, for $i = 1, 2$; $y$ is the common or coupling variable.

(a) Primal decomposition. Explain how to solve this problem using primal decomposition, using (say) the subgradient method for the master problem. (10 points)

(b) Dual decomposition. Explain how to solve this problem using dual decomposition, using (say) the subgradient method for the master problem. Give a condition (on the problem data) that allows you to guarantee that the primal variables $x_i^{(k)}$ converge to optimal values. (20 points)

2) Solving LPs via alternating projections. Consider an LP in standard form,

$$
\begin{array}{ll}
\text{minimize} & c^T x \\
\text{subject to} & Ax = b \\
& x \succeq 0,
\end{array}
$$

with variable $x \in \mathbf{R}^n$, and where $A \in \mathbf{R}^{m \times n}$. A tuple $(x, \nu, \lambda) \in \mathbf{R}^{2n+m}$ is primal-dual optimal if and only if

$$
Ax = b, \quad x \succeq 0, \quad -A^T \nu + \lambda = c, \quad \lambda \succeq 0, \quad c^T x + b^T \nu = 0.
$$

These are the KKT optimality conditions of the LP. The last condition, which states that the duality gap is zero, can be replaced with an equivalent one, $\lambda^T x = 0$, which is complementary slackness.

(a) Let $z = (x, \nu, \lambda)$ denote the primal-dual variable. Express the optimality conditions as $z \in \mathcal{A} \cap \mathcal{C}$, where $\mathcal{A}$ is an affine set and $\mathcal{C}$ is a simple cone. Give $\mathcal{A}$ as $\mathcal{A} = \{z \mid Fz = g\}$, for appropriate $F$ and $g$. (10 points)

(b) Explain how to compute the Euclidean projections onto $\mathcal{A}$ and also onto $\mathcal{C}$. (10 points)
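Both projections can be prototyped in a few lines. The sketch below assumes $F$ has full row rank and takes the cone to be $\mathcal{C} = \{(x, \nu, \lambda) : x \succeq 0, \; \lambda \succeq 0\}$ with $\nu$ free, which matches the sign constraints in the KKT conditions; the helper names are illustrative, not part of the assignment.

```python
import numpy as np

def project_affine(z, F, g):
    """Euclidean projection onto {z : F z = g}, assuming F has full row rank:
    P(z) = z - F^T (F F^T)^{-1} (F z - g)."""
    correction = F.T @ np.linalg.solve(F @ F.T, F @ z - g)
    return z - correction

def project_cone(z, n, m):
    """Projection onto C = {(x, nu, lam) : x >= 0, lam >= 0},
    with x in R^n, nu in R^m (free), lam in R^n: clip the signed parts."""
    z = z.copy()
    z[:n] = np.maximum(z[:n], 0.0)          # x >= 0
    z[n + m:] = np.maximum(z[n + m:], 0.0)  # lam >= 0; nu in the middle is untouched
    return z
```

Alternating projections then simply applies these two maps in turn until the iterates stop changing.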

3) Quantile regression. For $\tau \in (0, 1)$, define $h_\tau : \mathbf{R}^n \to \mathbf{R}$ as

$$
h_\tau(x) = \tau \mathbf{1}^T x_+ + (1 - \tau)\mathbf{1}^T x_-,
$$

where $x_+ = \max\{x, 0\}$ and $x_- = \max\{-x, 0\}$, with the maximum taken elementwise.
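For concreteness, $h_\tau$ can be evaluated directly from this definition; a minimal numpy sketch (the helper name `h_tau` is ours):

```python
import numpy as np

def h_tau(x, tau):
    """Evaluate h_tau(x) = tau * 1^T x_+ + (1 - tau) * 1^T x_-,
    with x_+ = max(x, 0) and x_- = max(-x, 0) taken elementwise."""
    x_plus = np.maximum(x, 0.0)
    x_minus = np.maximum(-x, 0.0)
    return tau * x_plus.sum() + (1.0 - tau) * x_minus.sum()
```

Note that for $\tau = 1/2$ this is half the $\ell_1$ norm, which is a useful sanity check.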

(a) Give a simple expression for the proximal operator of h? .(10 points)

(b) The quantile regression problem is

$$
\text{minimize} \quad h_\tau(Ax - b),
$$

with variable $x \in \mathbf{R}^n$ and parameters $A \in \mathbf{R}^{m \times n}$, $b \in \mathbf{R}^m$, and $\tau \in (0, 1)$. Explain how to use ADMM to solve this problem by introducing a new variable (and constraint) $z = Ax - b$. Give the details of each step in ADMM, including how one of the steps can be greatly sped up after the first step. (20 points)


(c) Implement your method on the given data [hw6 3c.mat] (i.e., $A$ and $b$), for $\tau \in \{0.2, 0.5, 0.8\}$. For each of these three values of $\tau$, give the optimal objective value, and plot a histogram of the residual vector $Ax - b$. (20 points)

Hint. You should develop, debug, and test your code on a smaller problem instance, so you can easily (i.e., quickly) check the results against CVX.
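If you want a Python analogue of the CVX cross-check, one standard route is to rewrite the problem as a linear program, $\text{minimize } \tau \mathbf{1}^T u + (1-\tau)\mathbf{1}^T v$ subject to $Ax - b = u - v$, $u, v \succeq 0$, and hand it to an LP solver. This is a sketch for small test instances only, not the ADMM method the problem asks for, and the helper name is ours:

```python
import numpy as np
from scipy.optimize import linprog

def quantile_regression_lp(A, b, tau):
    """Solve minimize h_tau(Ax - b) via the LP
    minimize tau*1'u + (1-tau)*1'v  s.t.  Ax - u + v = b,  u, v >= 0."""
    m, n = A.shape
    # Stacked variable (x, u, v); only u and v appear in the objective.
    c = np.concatenate([np.zeros(n), tau * np.ones(m), (1 - tau) * np.ones(m)])
    A_eq = np.hstack([A, -np.eye(m), np.eye(m)])   # A x - u + v = b
    bounds = [(None, None)] * n + [(0, None)] * (2 * m)
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=bounds, method="highs")
    return res.x[:n], res.fun
```

Comparing the returned objective value with your ADMM objective on the same small instance gives the "quick check" the hint describes.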

4) Sum-of-norms heuristic for block sparsity. The basic $\ell_1$ heuristic for finding a sparse vector $x$ in a convex set $\mathcal{C}$ is to minimize $\|x\|_1$ over $\mathcal{C}$. We are given a partition of $x$ into subvectors: $x = (x^{(1)}, \ldots, x^{(k)})$, with $x^{(i)} \in \mathbf{R}^{n_i}$. Our goal is to find an $x \in \mathcal{C}$ with the fewest nonzero subvectors $x^{(i)}$. Like the basic sparsity problem, this problem is in general difficult to solve, but a variation on the $\ell_1$ heuristic can work well to give a block-sparse, if not the block-sparsest, vector. The heuristic is to minimize

$$
\sum_{i=1}^{k} \|x^{(i)}\|
$$

over $x \in \mathcal{C}$.

(a) What happens if the norms above are $\ell_1$ norms? Would you expect $x$ to be sparse? Block sparse?

(b) Generate a nonempty polyhedron $\{y \mid Ay \preceq b\}$ using the given data [hw6 4b.mat]. We will divide $x$ into 20 subvectors, each of length 5. Use the heuristic above, with $\ell_1$, $\ell_2$, and $\ell_\infty$ norms on the subvectors, to find a block-sparse $x$ in your polyhedron. (10 points)

Hints. You may find the functions norms and reshape useful.
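The hinted norms/reshape pattern carries over directly to numpy for post-processing a candidate $x$; a small sketch, with helper names ours (note that numpy's reshape is row-major, unlike MATLAB's column-major reshape, so each row below holds one subvector):

```python
import numpy as np

def sum_of_block_norms(x, block_len, ord=2):
    """Sum of norms of consecutive length-block_len subvectors of x,
    mirroring MATLAB's sum(norms(reshape(x, block_len, []), ord))."""
    blocks = x.reshape(-1, block_len)                   # one subvector per row
    return np.linalg.norm(blocks, ord=ord, axis=1).sum()

def num_nonzero_blocks(x, block_len, tol=1e-6):
    """Block sparsity: count subvectors whose norm exceeds tol."""
    blocks = x.reshape(-1, block_len)
    return int((np.linalg.norm(blocks, axis=1) > tol).sum())
```

Counting nonzero blocks this way lets you compare how block-sparse the solutions under the three norms turn out to be.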

5) Rank minimization via the nuclear norm. A simple heuristic for finding a matrix of low (if not minimum) rank in a convex set is to minimize its nuclear norm, i.e., the sum of its singular values, over the convex set. Test this method on a numerical example, obtained by generating data matrices $A_0, \ldots, A_n \in \mathbf{R}^{p \times q}$ and attempting to minimize the rank of $A_0 + x_1 A_1 + \cdots + x_n A_n$. You can use $n = 50$, $p = 10$, $q = 10$ (data is given in [hw6 5.mat]). (10 points)
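The two ingredients of the experiment, forming the affine combination and evaluating the nuclear norm, are cheap to compute via the SVD; a minimal numpy sketch with illustrative helper names:

```python
import numpy as np

def nuclear_norm(X):
    """Nuclear norm of X: the sum of its singular values."""
    return np.linalg.svd(X, compute_uv=False).sum()

def affine_combination(x, A_list):
    """Form A_0 + x_1 A_1 + ... + x_n A_n from coefficients x and
    matrices A_list = [A_0, A_1, ..., A_n]."""
    return A_list[0] + sum(xi * Ai for xi, Ai in zip(x, A_list[1:]))
```

After solving the nuclear-norm minimization (e.g., in CVX), inspecting the singular values of the resulting matrix shows how many are numerically zero, i.e., the achieved rank.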

6) Robust geometric program with norm-based uncertainty. We consider a geometric program in variables $x \in \mathbf{R}^n_+$, with problem data $a \in \mathbf{R}^k_+$, data matrix $B \in \mathbf{R}^{m \times k}$, and exponent vectors $\alpha_i \in \mathbf{R}^n$ and $\beta_{ij} \in \mathbf{R}^n$. For shorthand, we define $x^\alpha = \prod_{j=1}^{n} x_j^{\alpha_j}$ for vectors $\alpha \in \mathbf{R}^n$. Consider the problem

$$
\begin{array}{ll}
\text{minimize} & \displaystyle\sum_{i=1}^{k} a_i x^{\alpha_i} \\
\text{subject to} & \displaystyle\sum_{j=1}^{k} B_{ij} x^{\beta_{ij}} \le 1 \quad \text{for } i = 1, \ldots, m.
\end{array}
$$

Now assume that we have norm-based data uncertainty in our multipliers $a_i$ and $B_{ij}$; that is, there exist constants $\gamma \ge 0$ and $\epsilon \ge 0$ where all we know is that the data could be $a_i + \delta_i$ for a vector $\delta \in \mathbf{R}^k$ such that $\|\delta\|_p \le \gamma$, and similarly for the rows of $B$. We would like to solve the robust formulation

$$
\begin{array}{ll}
\text{minimize} & \displaystyle\sup_{\|\delta\|_p \le \gamma} \sum_{i=1}^{k} (a_i + \delta_i) x^{\alpha_i} \\
\text{subject to} & \displaystyle\sum_{j=1}^{k} (B_{ij} + e_j) x^{\beta_{ij}} \le 1 \quad \text{for } i = 1, \ldots, m \text{ and all } e \in \mathbf{R}^k \text{ s.t. } \|e\|_p \le \epsilon.
\end{array}
$$

Here $p \in [1, \infty]$ generates the norms for the uncertainty sets. Show how to formulate the robust problem above as a geometric program. (20 points)

