Candidacy Lecture: Introduction to Reoptimization

 
Unless P=NP, we cannot obtain polynomial-time algorithms that solve hard combinatorial problems exactly. One practical approach to such problems is to relax the requirement of always finding an optimal solution and settle for “good enough” solutions. Algorithms that are guaranteed to produce a solution of a certain quality are called approximation algorithms. However, not all hard problems are approximable; that is, for some problems we cannot obtain a polynomial-time algorithm that guarantees the quality of its solution.
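As a concrete example of an approximation algorithm (my own illustration, not from the lecture), here is a minimal sketch of the classic 2-approximation for Minimum Vertex Cover:

```python
def vc_2_approx(edges):
    """Classic 2-approximation for Minimum Vertex Cover: scan the edges,
    and whenever an edge is uncovered, take BOTH of its endpoints.
    The chosen edges form a matching, and any optimal cover must pick
    at least one endpoint of each of them, hence |cover| <= 2 * OPT."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Path a-b-c: the optimum is {b}; the algorithm returns {a, b}, size 2 <= 2*1.
print(vc_2_approx([("a", "b"), ("b", "c")]))
```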
 
In this lecture, we present the concept of reoptimization. In this approach, we are given an instance I of some problem Π, an optimal solution OPT for Π on I, and a modified instance I’ resulting from a local perturbation of I; we wish to use OPT in order to solve Π on I’. With this additional information, reoptimization may improve the approximability of the problem or the running time of its solution. In fact, for some problems we can obtain a polynomial-time approximation scheme (PTAS) for the reoptimization variant, provided that the unmodified problem is approximable.
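To make the setting concrete, here is a minimal sketch of reoptimization (again my own illustration, with Minimum Vertex Cover as the assumed problem Π and edge insertion as the local perturbation):

```python
def reoptimize_vertex_cover(opt_cover, new_edge):
    """Given an optimal cover OPT for a graph G and the perturbed instance
    G' = G plus one edge (u, v), return a feasible cover for G'.
    If the new edge is already covered, OPT itself remains optimal for G';
    otherwise adding one endpoint yields a cover of size at most OPT(G') + 1."""
    u, v = new_edge
    if u in opt_cover or v in opt_cover:
        return set(opt_cover)       # the old optimum already covers G'
    return set(opt_cover) | {u}     # patch the single uncovered edge

# G is the path a-b-c with optimal cover {b}; the perturbation adds edge (a, c).
print(reoptimize_vertex_cover({"b"}, ("a", "c")))   # {'a', 'b'}
```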

 

Randomized Computation



 

Lecture on Randomized Computation for our Computational Complexity class.

This lecture includes a short introduction to randomized algorithms. It covers probabilistic Turing machines (PTMs), i.e., TMs that make use of random numbers in their computation, as well as the different complexity classes they recognize. These classes include bounded-error probabilistic polynomial time (BPP), RP, co-RP, and ZPP. The relationships among these classes are also presented, as well as their relationship to known complexity classes such as P and NP.
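As a concrete example of the one-sided error behind RP and co-RP (my own illustration, not part of the slides), consider the Miller-Rabin compositeness test: a “composite” answer is always correct, while a “probably prime” answer errs with exponentially small probability.

```python
import random

def is_probably_prime(n, trials=20):
    """Miller-Rabin test, a co-RP-style algorithm with one-sided error:
    a 'composite' answer (False) is always correct, while a 'prime'
    answer (True) is wrong with probability at most 4**(-trials)."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    r, d = 0, n - 1                 # write n - 1 as 2^r * d with d odd
    while d % 2 == 0:
        r += 1
        d //= 2
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False            # a is a witness: n is definitely composite
    return True                     # no witness found: n is probably prime

print(is_probably_prime(561))      # False (a Carmichael number, still caught)
print(is_probably_prime(104729))   # True (the 10000th prime)
```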

 

Motif Finding on GPUs using Random Projections

CLEMENTE, JHOIRENE. FINDING MOTIFS IN PARALLEL USING RANDOM PROJECTION ON GPUS (Under the direction of HENRY N. ADORNA, Ph.D.) Biological motifs are short patterns that occur a significant number of times in a set of DNA sequences. These motifs are transcription factor binding sites that help regulate transcription and therefore gene expression. Detecting these patterns helps in gene function discovery and in building regulatory networks. Mutations may occur at random positions of the genome, and these patterns are also subject to modification, making the problem more challenging. A variant called planted (l, d)-motif finding models the detection of these subtle motif patterns in the DNA. However, several existing algorithms fail to recover most of the planted (l, d)-motifs. To address this problem, a hybrid algorithm was proposed in the literature, which we will refer to as FMURP (Finding Motifs using Random Projection). It uses an initialization method called Projection to avoid being trapped in local maxima and therefore increases the chance of recovering the planted motifs. This algorithm is shown to have high accuracy in solving the motif finding and planted (l, d)-motif finding problems. This research presents a parallel algorithm and implementation of FMURP on Graphics Processing Units (GPUs) using CUDA. It also provides details on the implementation and the optimizations done on the GPU to minimize space usage. The implementation, called CUDA-FMURP, was tested on randomly generated (l, d)-motif instances and is shown to recover the majority of the planted motifs. It is also shown that CUDA-FMURP obtains a maximum speedup of 6.8 using a 512-core GPU with compute capability 2.0.
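To give a flavor of the Projection initialization used by FMURP, here is a simplified sequential sketch in Python (illustrative only; the actual implementation is in CUDA and feeds the enriched buckets into a refinement step such as EM):

```python
import random
from collections import defaultdict

def project_lmers(sequences, l, k):
    """One round of Projection (simplified sketch): hash every l-mer by
    the characters at k randomly chosen positions. Occurrences of the
    planted motif tend to collide in the same bucket even when each
    occurrence carries up to d mutations."""
    positions = sorted(random.sample(range(l), k))
    buckets = defaultdict(list)
    for s_idx, seq in enumerate(sequences):
        for off in range(len(seq) - l + 1):
            lmer = seq[off:off + l]
            key = "".join(lmer[p] for p in positions)
            buckets[key].append((s_idx, off))
    return buckets

# Buckets whose size exceeds a threshold seed the local refinement step.
seqs = ["ACGTACGTGGA", "TTACGTACGAA"]
buckets = project_lmers(seqs, l=8, k=3)
print(max(buckets.items(), key=lambda kv: len(kv[1])))
```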

Review on the paper entitled “A Fast File System for UNIX”

This is a review of the paper entitled “A Fast File System for Unix” by Marshall Kirk McKusick et al. of the Computer Systems Research Group at the University of California, Berkeley, written in 1984.

 

The paper discusses the authors’ reimplementation of the UNIX file system, in which they adapt the system to a wide range of peripheral and processor characteristics. The new implementation is measured to provide file access rates up to ten times faster than the traditional UNIX file system. Several functional improvements to the file system are also discussed, such as advisory locks on files, extension of the name space across file systems, the ability to use long file names, and administrative control of resource usage.

 

The two major contributions of this paper are the following modifications to the file system organization.

 

  1. The first modification concerns storage utilization. The study optimizes storage utilization by increasing the block size. Through this, bigger files can be transferred in a single disk transaction, greatly increasing throughput. However, a UNIX file system is composed of many small files, so a large block size increases the space wasted. To resolve this issue, a single block is further partitioned into one or more addressable fragments. Since these fragments are addressable, multiple small files can reside in one data block (see the first sketch after this list).

 

  2. The second modification is file system reparameterization. This modification is essential for performing optimal configuration-dependent block allocations. Each file system is parameterized and adapted to the type of disk on which it is placed. The parameters used are the speed of the processor, the hardware support for mass storage transfers, and the characteristics of the mass storage devices (see the second sketch after this list).
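The following is a minimal sketch of the block/fragment idea from the first modification (the 4096-byte block and 1024-byte fragment sizes are the paper’s example configuration; the function itself is my own illustration, not the paper’s allocator):

```python
def blocks_and_fragments(file_size, block_size=4096, frag_size=1024):
    """Sketch of the block/fragment layout (illustrative, not the paper's
    allocator): the body of a file goes into full blocks and the tail
    into addressable fragments, so small files waste far less space."""
    full_blocks, tail = divmod(file_size, block_size)
    tail_frags = (tail + frag_size - 1) // frag_size    # ceiling division
    used = full_blocks * block_size + tail_frags * frag_size
    return full_blocks, tail_frags, used - file_size    # blocks, frags, waste

# A 500-byte file occupies one 1024-byte fragment (524 bytes wasted) instead
# of a whole 4096-byte block (3596 bytes wasted).
print(blocks_and_fragments(500))    # (0, 1, 524)
print(blocks_and_fragments(5000))   # (1, 1, 120)
```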

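And here is a sketch of the kind of calculation the reparameterization enables (all numbers are hypothetical; this is not the paper’s code): given the disk’s rotation speed and the time the system needs to initiate the next transfer, one can compute how far apart consecutively allocated blocks should be placed on a track.

```python
import math

def sectors_to_skip(rpm, sectors_per_track, io_setup_ms):
    """Illustrative reparameterization calculation (hypothetical numbers):
    compute how many sectors to leave between consecutively allocated
    blocks so that, after the processor spends io_setup_ms starting the
    next transfer, the head arrives at the next block just in time
    instead of waiting out a full rotation."""
    ms_per_revolution = 60_000 / rpm
    ms_per_sector = ms_per_revolution / sectors_per_track
    return math.ceil(io_setup_ms / ms_per_sector)

# e.g. a 3600 rpm disk with 32 sectors per track and ~2 ms of setup time
print(sectors_to_skip(3600, 32, 2.0))   # 4
```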
 

Although the paper significantly improved data transfer, the gains are not uniform between reading and writing: the write rate of the new file system is about 50% slower than its read rate, because the kernel has to do twice as many disk allocations per second.