Date | Scheduled Talk
|
Mar 19, 2008
2:30pm - 3:30pm Goodwin 524
|
Methods for Evaluating Software Architecture: A Survey
Banani Roy
Software architectural evaluation has become a familiar practice in the software engineering community for developing quality software. Architectural evaluation reduces software development effort and costs, and enhances the quality of the software by verifying that quality requirements are addressed and by identifying potential risks. Several methods and techniques exist to evaluate software architectures with respect to desired quality attributes such as maintainability, usability, and performance. This paper presents a discussion of different software architectural evaluation methods and techniques using a taxonomy. The taxonomy distinguishes architectural evaluation methods based on the artifacts to which the methods are applied and on the two phases (early and late) of the software life cycle in which they are used. The artifacts include the specification of a whole software architecture and its building blocks: software architectural styles or design patterns. The discussion reviews existing well-known architectural evaluation methods in order to survey the state of the art in software architectural evaluation. The paper also summarizes the importance of the different evaluation methods, the similarities and differences between them, and their applicability, strengths, and weaknesses.
|
Mar 11, 2008
2:30pm - 3:30pm Goodwin 524
|
A Comparative Study of Certain Classes of Semantic Representation
Craig Thomas
This summary paper describes the general properties and features of two
systems of semantic representation: Richard Montague's Intensional Logic
and Ray Jackendoff's Conceptual Structures. Each system of representation is
based upon a different type of semantic theory, and thus each one is meant
to express different semantic and linguistic phenomena. The basic concepts
of how each formalism works and their expressiveness will be explored.
Several linguistic phenomena will be explained and used as test cases to
demonstrate the expressiveness and limitations of each semantic formalism.
Additionally, first-order predicate calculus is explained and used to
express linguistic phenomena as an aid for the reader.
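As a brief illustration of the kind of first-order representation discussed (this example is ours, not necessarily one drawn from the paper), a scopally ambiguous sentence such as "Every student reads a book" receives two candidate logical forms:

    \forall x\, (\text{student}(x) \rightarrow \exists y\, (\text{book}(y) \land \text{reads}(x, y)))
    \exists y\, (\text{book}(y) \land \forall x\, (\text{student}(x) \rightarrow \text{reads}(x, y)))

The first form lets each student read a possibly different book; the second asserts a single book read by all students.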
|
Nov 23, 2007
12:30pm - 1:30pm Goodwin 521
|
Optimal Channel Assignment in Multi-hop Cellular Networks
Hung Tam
Wireless networks have made great gains in usability and popularity.
However, inherent limitations on cell capacity and coverage still exist.
These networks also suffer from dead-spot and hotspot problems. Ad hoc
multi-hop relaying enhances cell capacity and coverage, alleviates the
dead-spot problem, and helps to ease congestion in hotspots. However,
multi-hopping also increases packet delay. Effective channel assignment is
key to reducing delay. Existing channel assignment schemes are heuristics
and may not guarantee optimal solutions in terms of minimum delay. In this
paper, we provide an optimal channel assignment (OCA) scheme for ad hoc
TDD W-CDMA multi-hop cellular networks (MCN) to minimize packet delay. OCA
can also be used as an unbiased tool for comparing different
network topologies, network densities, and protocols. To the best of our
knowledge, this is the first time that a minimum delay optimal channel
assignment is proposed in a multi-hop cellular environment.
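The paper's OCA formulation is not reproduced in this abstract. As a minimal sketch of the core idea, namely that exact search (unlike a heuristic) guarantees minimum delay, the following Python toy enumerates all TDD slot assignments along a small relay path under a hypothetical delay model:

    from itertools import product

    # Hypothetical toy model: a relay path with `hops` links and `frame_len`
    # TDD time slots per frame. A packet forwarded on slot s_i at hop i must
    # wait for the next occurrence of s_{i+1}, so per-hop delay is the slot
    # gap modulo the frame length.

    def path_delay(assignment, frame_len):
        """End-to-end delay (in slots) of one slot assignment along the path."""
        delay = 0
        for cur, nxt in zip(assignment, assignment[1:]):
            delay += (nxt - cur) % frame_len or frame_len  # full frame if same slot
        return delay

    def optimal_assignment(hops, frame_len):
        """Exhaustive search: guaranteed minimum delay, unlike greedy heuristics."""
        best = min(product(range(frame_len), repeat=hops),
                   key=lambda a: path_delay(a, frame_len))
        return best, path_delay(best, frame_len)

    if __name__ == "__main__":
        assignment, delay = optimal_assignment(hops=4, frame_len=8)
        print(assignment, delay)  # e.g. (0, 1, 2, 3) with delay 3

A realistic formulation would add interference and capacity constraints and use integer programming rather than brute-force enumeration, but the optimality guarantee is the same in spirit.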
|
Sept 25, 2007
2:30pm - 3:30pm Goodwin 524
|
Slicing the Three-layer Architecture: A Semantic Foundation for
Behavioural Specification
Michelle Crane
We outline a research proposal whose overall goal is to contribute to
the definition of a formal semantics for UML, and indeed visual
behavioural modelling languages in general. Specifically, we aim to
validate the three-layer semantic architecture, used as a way of
explaining the behavioural semantics of UML. The validation includes
a definition of the semantics of UML actions and activities, as well
as a prototype interpreter.
A Unified Approach to Computing and Visualizing Uncertainty in the Operating Room
Amber Simpson
Measurement uncertainty corrupts data at all stages of computer-assisted surgery: during imaging, 3D model construction, registration, instrument tracking, and rendering. Uncertainty has been ignored by commercial computer-assisted surgery systems. A surgeon who is unaware of this uncertainty can make critical errors (consider inserting screws into the spine). We describe methods to compute and visualize the uncertainty present when guiding an instrument, such as a needle or drill, during computer-assisted surgery.
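The talk's specific computation and visualization methods are not detailed here; a standard first-order ingredient of such work is propagating a tracked point's measurement covariance through a rigid registration. A minimal numpy sketch, with made-up numbers:

    import numpy as np

    # Hypothetical tracked instrument-tip position (mm) and its 3x3
    # measurement covariance in tracker coordinates.
    p_tracker = np.array([10.0, -5.0, 120.0])
    cov_tracker = np.diag([0.25, 0.25, 1.0])  # say, worse accuracy in depth

    # Rigid registration tracker -> image coordinates: rotation R, translation t.
    theta = np.deg2rad(30)
    R = np.array([[np.cos(theta), -np.sin(theta), 0],
                  [np.sin(theta),  np.cos(theta), 0],
                  [0,              0,             1]])
    t = np.array([2.0, 3.0, -1.0])

    # First-order propagation: a linear map takes the covariance to R @ cov @ R.T
    # (uncertainty in the registration itself would add a further term).
    p_image = R @ p_tracker + t
    cov_image = R @ cov_tracker @ R.T

    print(p_image)
    print(np.sqrt(np.diag(cov_image)))  # per-axis standard deviations in image space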
|
Sept 5, 2007
1:30pm - 2:30pm Goodwin 524
|
Two Birds, One Stone: Selecting Functionally Informative Tag SNPs for Disease Association Studies
Phil Lee
Selecting an informative subset of SNPs, generally referred to as tag
SNPs, to genotype and analyze is considered to be an essential step toward effective
disease association studies. However, while the selected informative tag
SNPs may characterize the allele information of a target genomic region, they are
not necessarily the ones directly associated with disease or with functional impairment.
To address this limitation, we present the first integrative SNP selection
system that simultaneously identifies SNPs that are both informative and carry
a deleterious functional effect – which in turn means that they are likely to be
directly associated with disease. We formulate the problem of selecting functionally
informative tag SNPs as a multi-objective optimization problem and present
a heuristic algorithm for addressing it. We also present the system we developed
for assessing the functional significance of SNPs. To evaluate our system, we
compare it to other state-of-the-art SNP selection systems, which conduct both
information-based tag SNP selection and function-based SNP selection, but do
so in two separate consecutive steps. Using 14 datasets, based on disease-related
genes curated by the OMIM database, we show that our system consistently improves
upon current systems.
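The authors' actual multi-objective algorithm is not given in the abstract. The following toy Python greedy sketch, with hypothetical inputs r2 (pairwise linkage disequilibrium) and func (a deleteriousness score), only illustrates scalarizing the two objectives, informativeness and functional impact:

    # Toy greedy sketch of 'functionally informative' tag SNP selection.
    # Not the authors' algorithm: it scalarizes the two objectives
    # (information coverage and functional impact) with a weight w.

    def select_tags(r2, func, k, w=0.5, r2_min=0.8):
        n = len(func)
        chosen, covered = [], set()
        for _ in range(k):
            def gain(i):
                newly = {j for j in range(n)
                         if j not in covered and r2[i][j] >= r2_min}
                return w * len(newly) / n + (1 - w) * func[i]
            cand = max((i for i in range(n) if i not in chosen), key=gain)
            chosen.append(cand)
            covered |= {j for j in range(n) if r2[cand][j] >= r2_min}
        return chosen

    # Example with 4 SNPs: SNPs 0-2 tag each other; SNP 3 only tags itself
    # but is highly deleterious, so it is picked for its functional score.
    r2 = [[1.0, 0.9, 0.85, 0.1],
          [0.9, 1.0, 0.8, 0.1],
          [0.85, 0.8, 1.0, 0.2],
          [0.1, 0.1, 0.2, 1.0]]
    func = [0.1, 0.2, 0.1, 0.95]
    print(select_tags(r2, func, k=2))  # [3, 1]: the deleterious SNP, then a tag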
|
Aug 21, 2007
2:30pm - 3:30pm Goodwin 524
|
An Augmented Reality Tool-Mounted Display for Surgical Guidance
Kevin Kassil
In computer-assisted surgery, a computer display provides 3D guidance
of surgical tools. However, current guidance displays have
a number of drawbacks. This thesis considers the advantages of a prototype
guidance system for drilling tasks. We attached a small LCD screen and
video camera to a surgical drill. The screen's point of view was controlled
by moving the drill. A user study showed that the tool-mounted LCD screen
can provide significantly better positional and angular accuracy than the
conventional display. The study also showed that a video camera might
not be useful for guidance.
|
Aug 9, 2007
2:30pm - 3:30pm Goodwin 524
|
Hamilton Circuits in Hexagonal Grid Graphs
Henry Xiao
We look at a variant of the Hamilton circuit problem, where the input is
restricted to hexagonal grid graphs. A hexagonal grid graph has a vertex set that
is a subset of the grid points of a regular hexagonal tiling of the plane and edges
corresponding to hexagon sides. We show that the Hamilton circuit problem in
hexagonal grid graphs is NP-complete.
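For illustration (not part of the paper's reduction), generic exponential backtracking is essentially what NP-completeness leaves us with. A minimal Python sketch that also runs on hexagonal grid graphs given as adjacency lists:

    # Brute-force backtracking for Hamilton circuits on an arbitrary graph.

    def hamilton_circuit(adj):
        n = len(adj)
        start = 0
        path = [start]
        seen = {start}

        def extend():
            if len(path) == n:
                return start in adj[path[-1]]  # can we close the circuit?
            for v in adj[path[-1]]:
                if v not in seen:
                    path.append(v); seen.add(v)
                    if extend():
                        return True
                    path.pop(); seen.remove(v)
            return False

        return path if extend() else None

    # A single hexagon (6-cycle) trivially has a Hamilton circuit.
    hexagon = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
    print(hamilton_circuit(hexagon))  # [0, 5, 4, 3, 2, 1] (then back to 0)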
|
Jun 19, 2007
2:30pm - 3:30pm Goodwin 524
|
An Aspect-Oriented Intrusion Detection Framework
Paul Zhu
Security is a critical issue for software systems, especially for those systems
which are connected to networks and the Internet, since most of them suffer
from various malicious attacks. Intrusion detection is an approach to protect
software against such attacks. However, security vulnerabilities that are
exploited by intruders cut across multiple modules in software systems and
are difficult to address and monitor. Aspect-oriented software development
(AOSD) is a programming paradigm that enables modularization of these kinds
of concerns, called cross-cutting concerns. A number of works have utilized AOSD
to address security issues of software systems, but none of them has employed
AOSD for intrusion detection.
In this thesis, we propose an AOSD framework for developing intrusion-aware
software systems. We identify and specify vulnerabilities and potential attacks
against the target software systems. These attack scenarios and intrusion detection
aspects are modeled using an aspect-oriented Unified Modeling Language (UML)
profile. Based on the aspect-oriented attack scenario model, the intrusion detection
aspects are implemented and woven into the target system via an aspect-oriented
programming language. The resulting target system is an intrusion-aware system,
which has the ability to detect intrusions automatically. We present an experimental
evaluation by applying this framework to some of the most common attacks included
in the Web Application Security Consortium (WASC) web security threat classification.
The experimental results demonstrate that the framework is effective in specifying and
implementing intrusion detection and can be applied to a wide range of attacks.
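The framework itself uses an aspect-oriented UML profile and an AOP language for weaving. As a loose Python analogy only, a decorator can play the role of an intrusion detection aspect interposed on a join point without touching its body (the token list below is hypothetical, not from the thesis):

    import functools, logging

    logging.basicConfig(level=logging.WARNING)

    def detect_sql_injection(func):
        """Aspect-like advice: inspect string arguments before the join point runs."""
        suspicious = ("' OR ", "--", ";", " UNION ")
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for a in list(args) + list(kwargs.values()):
                if isinstance(a, str) and any(s in a.upper() for s in suspicious):
                    logging.warning("possible SQL injection in %s: %r",
                                    func.__name__, a)
            return func(*args, **kwargs)
        return wrapper

    @detect_sql_injection   # 'weaving' by decoration rather than a real AOP language
    def find_user(name):
        return f"SELECT * FROM users WHERE name = '{name}'"

    find_user("alice")
    find_user("x' OR '1'='1")   # triggers the detection advice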
|
Jun 12, 2007
2:30pm - 3:30pm Goodwin 524
|
Using Quantum Mechanics to Enhance Information Processing
Marius Nagy
The weird quantum mechanical effects governing the behavior of sub-atomic
particles are about to revolutionize the way we perform computation and
manipulate information. This thesis is a testimony to the indelible mark that
quantum mechanics has already left on computer science. Specifically, we have
investigated some of the consequences of manipulating information at the
quantum level on data security, parallel processing, universality and
computability.
We have devised an efficient scheme for entanglement verification with direct
applicability to key distribution protocols based on entanglement. We have also
shown how an approach exploiting the context of a qubit in the quantum
Fourier transform can be successful in dealing with low levels of
eavesdropping, by propagating the disruption through data dependency.
The importance of parallelism for quantum information processing and its
consequence on universality is demonstrated through a series of evolving
computing paradigms for which only a parallel approach can guarantee a
reliable solution. We also bring needed clarification to the much-disputed
problem of comparing the computational power of a quantum machine with that
of a conventional computer.
|
Jun 4, 2007
12:00pm - 1:00pm Goodwin 524
|
Kinetic Visibility
Henry Xiao
In this paper, we survey kinetic visibility problems. Unlike
traditional visibility problems, these take moving objects into
consideration. They have been motivated by different practical
applications, such as computer graphics and robot design. Over the
years, some general combinatorial results on the visibility graph
have been established, and temporal coherence has been leveraged in
kinetic visibility studies for different scenes. This paper reviews
the data structures that have been developed to support kinetic
operations and the kinetic algorithms that have been designed to
achieve efficient visibility computation and maintenance. Open
problems related to kinetic visibility are addressed from different
perspectives. Both theoretical and practical work remains for
future studies.
|
May 3, 2007
2:30pm - 3:30pm Goodwin 521
|
Semi-automatic hybrid segmentation of bone in the Knee
Martin Matusiak
Many orthopaedic procedures performed on the knee require the segmentation
of bones from CT images, which is difficult. Four features of the CT data cause
these difficulties: (i) gaps or weakly defined edges in the cortical bone; (ii) diffused
bone boundaries; (iii) non-uniform texture of the bone across its surface; and (iv)
narrow channels between bones. In addition, the variability of bone tissue in the knee
(from spongy trabecular bone to dense cortical bone) further contributes to the
difficulty of segmentation. We present a proof-of-concept study for a hybrid semi-automated
segmentation technique for the knee. The hybrid approach combines the use of active
contours and a knee atlas constructed from MRI data. The active contour method that
we used overcomes the problems of small capture range and inability to converge
into concavities, which are common with active contour methods. The atlas provides
a good initial approximation of the bone geometry, and can provide anatomical
labelling for a specific patient's anatomy. Our segmentation is performed in three
dimensions (rather than on a per-slice basis, as is typical with CT segmentation), giving
a smoother segmentation, minimizing the user interaction, and reducing segmentation time.
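The thesis method is 3D and atlas-driven; as a much-simplified 2D illustration only, scikit-image's active_contour can snap an atlas-like initial circle onto a synthetic "bone" boundary (coordinates are (row, col) in recent scikit-image versions):

    import numpy as np
    from skimage.filters import gaussian
    from skimage.segmentation import active_contour

    # Synthetic 2D stand-in for one CT slice: a bright disk as the 'bone'.
    img = np.zeros((100, 100))
    rr, cc = np.mgrid[:100, :100]
    img[(rr - 50) ** 2 + (cc - 50) ** 2 < 30 ** 2] = 1.0

    # Initial contour from an 'atlas-like' rough guess: a circle near the bone.
    s = np.linspace(0, 2 * np.pi, 100)
    init = np.column_stack([50 + 38 * np.sin(s), 50 + 38 * np.cos(s)])

    # Snap the snake onto the disk boundary (smoothing first aids convergence).
    snake = active_contour(gaussian(img, sigma=3), init,
                           alpha=0.015, beta=10, gamma=0.001)
    print(snake.shape)  # (100, 2): refined boundary points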
|
March 19, 2007
1:30pm - 2:30pm Goodwin 524
|
A Framework for Modeling Motion of Total Knee Replacement
Elvis Chen
This dissertation presents a framework for the study of passive knee
kinematics. Computational models for solving both the forward and the inverse
kinematics of a prosthetic knee joint were formulated. For forward
kinematics, the model computed the predicted knee motion as a function of
joint parameters; these parameters included patient-specific ligament data as
well as the design and the surgical placement of the prosthesis. The model
also provided three-dimensional graphics visualization of the predicted
kinematics. It was applied to two commercially available prosthetic designs
and was validated experimentally. Simulation results suggested several
heuristic rules for surgical strategies for maximizing the post-operative
knee function.
For the study of inverse kinematics, the computational model provided a means
to estimate the patient-specific ligament data from a sequence of knee motion.
The novelty of the inverse kinematics model was that it embedded the
forward kinematics model in an unconventional numerical-optimization paradigm.
Given the knee motion and an initial estimate of the ligament data, the
inverse kinematics model recovered a set of joint parameters that, when used
as inputs to the forward kinematics model, resulted in the observed knee
motion. Simulation results suggested that it may be possible to obtain
patient-specific ligament data non-invasively, thus allowing the surgeons to
use post-operative knee function as a surgical consideration. Together, the
forward and the inverse modeling techniques described in this dissertation
serve as a foundation for kinematic-based total knee replacement surgery.
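The dissertation's forward model is not reproduced here, but the embedding of a forward model inside an optimizer can be sketched generically. With a hypothetical toy forward model, the inverse problem becomes least-squares recovery of parameters from observed motion:

    import numpy as np
    from scipy.optimize import least_squares

    # Hypothetical stand-in for the forward kinematics model: maps ligament
    # parameters (stiffness k, reference length L0) to a 'motion' curve
    # sampled over flexion angles. The real model is far richer.
    angles = np.linspace(0, np.pi / 2, 20)

    def forward_model(params, angles):
        k, L0 = params
        return L0 + k * np.sin(angles)   # toy motion as a function of flexion

    # Synthetic 'observed' motion from ground-truth parameters plus noise.
    rng = np.random.default_rng(0)
    truth = np.array([2.0, 10.0])
    observed = forward_model(truth, angles) + rng.normal(0, 0.05, angles.size)

    # Inverse kinematics as optimization: find parameters whose forward-model
    # output reproduces the observed motion, in the same spirit as embedding
    # the forward model in an optimization loop.
    fit = least_squares(lambda p: forward_model(p, angles) - observed,
                        x0=np.array([1.0, 5.0]))
    print(fit.x)  # close to [2.0, 10.0]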
|
March 8, 2007
1:30pm - 2:30pm Goodwin 524
|
Using Program Mutation for the Assessment of Testing and Formal
Analysis
Jeremy Bradbury
As a result of advances in hardware technology such as
multi-core processors there is an increased need for concurrent software
development. Unfortunately, developing correct concurrent code is more
difficult than developing correct sequential code. This difficulty is
due in part to the many different, possibly unexpected, executions of
the program, and leads to the need for special quality assurance
techniques for concurrent programs such as randomized testing and formal
state space exploration. This talk focuses on the complementary
relationship between such different state-of-the-art quality assurance
approaches in an effort to better understand the best bug detection
methods for concurrent software. An approach is presented that assesses
testing and formal analysis tools using metrics to measure the
effectiveness and efficiency of each technique at finding concurrency
bugs. Using program mutation, the assessment method creates a range of
faulty versions of a program and then evaluates the ability of various
testing and formal analysis tools to detect these faults. The approach
is implemented and automated in an experimental mutation analysis
framework (ExMAn) which allows results to be easily reproducible. To
demonstrate the approach, I will present the results of a comparison of
testing using the IBM concurrency testing tool ConTest and the NASA
formal state analysis system Java PathFinder. I will conclude with a
discussion of other potential applications of this research as well as
areas for future work.
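ExMAn targets Java programs and concurrency-specific mutation operators. Purely to show the mechanics of program mutation, the following Python toy generates a faulty variant by flipping a comparison operator:

    import ast

    class FlipComparison(ast.NodeTransformer):
        """Mutation operator: swap < and >= to create a faulty variant."""
        swaps = {ast.Lt: ast.GtE, ast.GtE: ast.Lt}

        def visit_Compare(self, node):
            self.generic_visit(node)
            node.ops = [self.swaps[type(op)]() if type(op) in self.swaps else op
                        for op in node.ops]
            return node

    source = "def ready(queue, limit):\n    return len(queue) < limit\n"
    mutant = ast.unparse(FlipComparison().visit(ast.parse(source)))
    print(mutant)   # the comparison is now >=; a good test suite kills this mutant

A mutation-based assessment then runs each tool over many such mutants and scores it by the fraction of seeded faults it detects.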
|
January 17, 2007
2:30pm - 3:30pm Goodwin 521
|
Molecular Codebreaking and Double Encoding
Cameron MacKay
DNA computing is an unconventional branch of computing that uses DNA
molecules and molecular biology experiments to perform computations.
This thesis evaluates the feasibility of implementing molecular
codebreaking, a DNA computing technique that uses a known-plaintext
attack to recover an encryption key, and double encoding, a proposed
error resistance technique that is designed to reduce the number of
false negatives that occur during DNA computation.
This thesis evaluates molecular biology experiments that make up a
molecular codebreaker, a theoretical DNA computer that had never before been
attempted in the laboratory. Molecular techniques such as
ligation, gel electrophoresis, polymerase chain reaction (PCR),
graduated PCR, and bead separation were carried out, reported upon and
found to be feasible with certain reservations.
An implementation of the error resistance technique of double encoding,
where bits are encoded twice in a DNA strand, was designed and
attempted. Although the implementation was unsuccessful, several issues
associated with double encoding were identified, such as encoding
adaptation problems, strand generation penalties, strand length
increases, and the possibility that double encoding may not reduce the
number of false negatives.
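As a silicon analogy of the known-plaintext attack (the thesis implements the search with DNA strands, not code, and the toy XOR cipher below is ours), exhaustive key search looks like this:

    from itertools import product

    # Try every key until the known plaintext encrypts to the known
    # ciphertext. A DNA computer performs the same exhaustive search
    # massively in parallel, one strand per candidate key.

    def xor_encrypt(plaintext: bytes, key: bytes) -> bytes:
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

    def recover_key(plaintext: bytes, ciphertext: bytes, key_len: int) -> bytes:
        for cand in product(range(256), repeat=key_len):
            key = bytes(cand)
            if xor_encrypt(plaintext, key) == ciphertext:
                return key
        raise ValueError("no key found")

    secret = bytes([7, 42])
    ct = xor_encrypt(b"attack at dawn", secret)
    print(recover_key(b"attack at dawn", ct, key_len=2))  # b'\x07*'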
|
January 15, 2007
11:30am - 12:30pm
Goodwin 736
|
Music Similarity Metrics: Recognizing Tempo, Transposition, Ornamentation, and Accentuation Properties
Nicole Mitchell
The concept of melodic similarity is important in the implementation
of music retrieval systems, musicology studies and interactive music
systems. Music similarity metrics classify and rank music segments
based on similarity of several attributes such as note pitches and
durations. In this thesis, we define four properties: tempo,
transposition, ornamentation and accentuation, for the evaluation of
music similarity metrics. A music similarity metric should be able
to recognize music segments which possess these properties. We
evaluate five existing similarity metrics on their ability to
recognize these four desired properties. These metrics are based on
the use of edit operations to transform one music segment into
another. They assess the cost of an edit operation independently of
the context in which the operation is applied. We find that
these metrics recognize transposition, and some forms of
ornamentation, but do not recognize tempo or accentuation
properties. We propose three new metrics which analyze music
segments based on hierarchy, context and accentuation. We discuss
the performance of these proposed metrics on the four desired
properties. The proposed metrics outperform existing metrics in
that they recognize additional forms of ornamentation, as well as
accentuation. This demonstrates that the use of context is
essential in assessing music similarity.
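As a sketch of the context-free baseline the thesis evaluates (not any specific published metric), an edit distance over pitch sequences charges fixed operation costs regardless of musical context:

    import functools

    # Baseline context-free edit distance between two melodic segments,
    # represented as MIDI pitch sequences. Each operation has a fixed cost
    # wherever it applies -- exactly the context independence identified
    # as a limitation.

    def melodic_distance(a, b, ins=1.0, dele=1.0, sub=1.0):
        @functools.lru_cache(maxsize=None)
        def d(i, j):
            if i == 0:
                return j * ins
            if j == 0:
                return i * dele
            return min(d(i - 1, j) + dele,
                       d(i, j - 1) + ins,
                       d(i - 1, j - 1) + (0 if a[i - 1] == b[j - 1] else sub))
        return d(len(a), len(b))

    theme = [60, 62, 64, 65, 67]             # C D E F G
    ornamented = [60, 62, 64, 66, 65, 67]    # passing tone F# inserted
    print(melodic_distance(theme, ornamented))  # 1.0: one insertion

Transposition invariance is typically obtained by comparing successive pitch intervals rather than absolute pitches; context-sensitive costs, as proposed in the thesis, would weight the same insertion differently on, say, strong and weak beats.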
|