The term “high end computer” has varying definitions. Most of the machines that fall under that label sit somewhere between the early, simple personal computer and very large supercomputers. As personal computers became more popular and readily available, prices came down; generic, cheaper alternatives were designed even as models with increased performance and capacity began to hit the market. The blending of the affordable PC with the superior supercomputer provides a machine whose possibilities are nearly endless, yet which can fit in one room of your home. If a uniform definition of the high end computer can be given, it involves the machine being expensive and having elite processing capability. Cost and performance go hand in hand, as the processor, memory, graphics card and hard drive play directly into both.
To further elaborate, we are talking about processors with multiple cores and clock speeds measured in gigahertz, synchronous dynamic memory with many gigabytes dedicated to active services and applications, very high capacity graphics cards, and solid state drives that can hold terabytes of data [1]. The trend in technology is that the new always seems to fade out the old, and the differences are usually so subtle that only experts in the field can explain what truly separates one generation from the next. The latest devices come at a higher cost than the previous iterations, and the enhanced performance is just part of the package. These computers are commonly associated with
research experiments, gaming and software development. One more field which uses high end computers is Computational Science. Computational Science is a budding discipline that draws a clear line so as not to be confused with Computer Science. In an article published in the SpringerPlus journal, the authors define it as “being the application and use of computational knowledge and skills (including programming) in order to aid in the understanding of and make new discoveries in traditional sciences, as opposed to the study of computers and computing per se” [2]. That classification describes how computers can be used in a variety of ways to assist with STEMM projects, where STEMM stands for science, technology, engineering, mathematics and medicine. Aside from the overlap with technology itself, the computers are primarily used to process large quantities of information, break down and solve complex equations, and store relevant records. Where computer science focuses on the components of hardware and software, computational science focuses on their uses in other fields. The interweaving of high end computing and computational science makes for a formidable tandem that has advanced computing in many areas.
One of the primary uses of this collaboration, as previously stated, is research. When research needs to produce results, it often does so in the form of data visualization. After so much information has been gathered, the reporting has to be condensed to the essential facets of the study. The numbers are broken down into categorical divisions, so that from the vast total the research can present representative parts of the sum. Data visualization is the process of converting the information into models that can be interpreted by individuals who are not certified experts in the field. One component of Computational Science is the talent of presenting graduate level research so that it can be received from a novice’s perspective. Though the experts will certainly be able to understand it as well, it is intended for a larger audience, and data visualization has been known to serve both very effectively. The following excerpt from a high end computing publication gives some insight into the procedure: “The scientific method cycle of today’s
terascale simulation applications consists of building experimental apparatus
(terascale simulation model), collecting data by running experiments (terascale
output of the simulation model), looking at the data (traditional
visualization), and finally performing data analyses and analysis-driven
visualization to discover, build, or test a new view of scientific reality
(scientific discovery) [3].” The computers assist with each integral step. Designing the model and deciding how it will operate is just as important as formally collecting the data. Computable models are the models of prime interest in computational science: a computable model is a computable function, as defined in computability theory, whose result can be compared to data from observations [4]. Model design also means deciding the actual form of the input and the output. Input can be as simple as entering numbers into preset forms or as involved as providing raw files and having the data parsed; telling the computer what to do with all of that evidence is the computational side that produces the output. For data visualization, the output will be a chart, a graph or a map, and researchers can present the details of why the model’s data was selected to create the image it does. Some of the variations include bar charts, histograms, scatter plots, network diagrams, streamgraphs, tree maps, Gantt charts and heat maps [5]. Would there be a loss in quality or relatability if, for example, a histogram were chosen instead of a Gantt chart? Discussing what justifies the choice of an evocative presentation for a given data set is a very interesting niche of this study. All of these variations, however, use the capabilities of high end computers to create high resolution images with rich colors and fine detail as the end result for computational science. These images translate pages and pages of raw data into a much more digestible format. Research that may have taken years and thousands of contributors can be reduced to a two dimensional representation viewable on a single page.
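To make this concrete, the sketch below (Python, written purely for illustration) walks through the cycle just described: a computable model, here an invented exponential-growth function, is evaluated, its predictions are compared to a small set of made-up observations, and the comparison is condensed into a single chart with matplotlib. None of the numbers, names or files come from the cited studies.

    # Minimal sketch: a computable model compared against observations,
    # then visualized. The model and the "observed" numbers are invented
    # purely for illustration.
    import math
    import matplotlib.pyplot as plt

    def growth_model(t, initial=100.0, rate=0.35):
        """Computable model: exponential growth evaluated at time t."""
        return initial * math.exp(rate * t)

    # Hypothetical observations collected at times 0..5 (made-up data).
    times = [0, 1, 2, 3, 4, 5]
    observed = [100.0, 142.0, 198.0, 285.0, 410.0, 575.0]

    # Run the model and compare its predictions with the observations.
    predicted = [growth_model(t) for t in times]
    errors = [abs(p - o) for p, o in zip(predicted, observed)]
    print("mean absolute error:", sum(errors) / len(errors))

    # Data visualization: condense the raw numbers into one image.
    plt.plot(times, observed, "o", label="observed")
    plt.plot(times, predicted, "-", label="model")
    plt.xlabel("time step")
    plt.ylabel("quantity")
    plt.legend()
    plt.title("Computable model vs. hypothetical observations")
    plt.savefig("model_vs_observations.png")

Even this toy example shows the division of labor: the model is just a function, the comparison is just arithmetic, and the visualization step is what turns the raw numbers into something a non-expert can read at a glance.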
Three more examples of applications of this combination are parallel computing, grid computing and distributed computing. From a computational science perspective, grid computing generates individual reports from separate locations as the information is fed into a primary resource. Distributed computing instead produces one comprehensive report using multiple network nodes to collect the data. Lastly, parallel computing uses one main source to drive all the other nodes of a network. Grid computing and distributed computing function in a very similar way, as the network architecture is almost identical, but the end result as well as the root cause can differ. A perfect illustration of this concept is the
BOINC program at the University of California, Berkeley. BOINC is an acronym for Berkeley Open
Infrastructure for Network Computing [6].
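The general pattern behind these approaches can be sketched in a few lines of Python. The coordinator/worker example below is not BOINC’s actual client or server code: the work function and the chunk sizes are invented, and a real project would add scheduling, validation and redundancy on top of it.

    # Minimal sketch of the coordinator/worker pattern behind volunteer,
    # grid and parallel computing: independent work units are handed to a
    # pool of workers and the partial results are combined into one report.
    # This is NOT BOINC's real protocol; it is an illustration only.
    from multiprocessing import Pool

    def process_work_unit(unit):
        """Stand-in for real scientific work: sum the squares of a chunk."""
        start, stop = unit
        return sum(n * n for n in range(start, stop))

    if __name__ == "__main__":
        # Split one large job into independent work units (chunks of a range).
        chunk = 250_000
        work_units = [(i, i + chunk) for i in range(0, 1_000_000, chunk)]

        # The "volunteer" workers process the units in parallel.
        with Pool(processes=4) as pool:
            partial_results = pool.map(process_work_unit, work_units)

        # The coordinator combines the partial results into a single report.
        print("combined result:", sum(partial_results))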
Its initial release was in 2002. Since then it has cycled through many participants who readily volunteer their computers and services for the research cause of their choice, and there are many causes to choose from. A recent check of the website displayed over 264,000 active volunteers using over 947,000 machines spread across almost 40 projects. Projects vary in topic from interstellar research and the search for alien lifeforms to what could be the next steps in advanced medicine. Each project has volunteers who dedicate time and resources to whatever they have an interest in; each person completes an agreement form and downloads software to become a part of it. Together these projects use several high end computers, and perhaps some not-so-high-end ones, across the network to receive data at an alarmingly fast rate.
Systems of this type are measured in floating point operations per second, or FLOPS. That unit of measure falls under the category of a performance metric [7]. High end computing can be measured by the application, by the machine, or by the combined configuration of the two, using such performance metrics. Counting the instructions of the application processed by the available sockets and cores of the system during a given clock cycle yields a figure for how many floating-point operations are carried out. To be precise, the FLOPS totals calculated across the BOINC infrastructure are reported in petaFLOPS, where the peta- prefix denotes a factor of 10^15. That quantity is possible because of the immense shared capacity and very little idle time or mechanical malfunction. It is an enormous number: to put it in perspective, imagine the speed you would have to move at to perform a task a million times in one billionth of a second.
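A rough way to see where such figures come from is to multiply sockets, cores per socket, clock rate and floating-point operations per cycle. The Python sketch below uses made-up hardware numbers, so it is only an illustration of the arithmetic, not a benchmark.

    # Back-of-the-envelope peak FLOPS estimate. The hardware numbers below
    # are invented for illustration; real systems report measured, not just
    # theoretical, performance.
    sockets = 2                # CPU sockets in the machine (assumed)
    cores_per_socket = 16      # cores per socket (assumed)
    clock_hz = 3.0e9           # 3.0 GHz clock (assumed)
    flops_per_cycle = 16       # floating-point ops per core per cycle (assumed)

    peak_flops = sockets * cores_per_socket * clock_hz * flops_per_cycle
    print(f"peak: {peak_flops:.3e} FLOPS")            # about 1.5e12
    print(f"peak: {peak_flops / 1e12:.2f} teraFLOPS")
    print(f"peak: {peak_flops / 1e15:.5f} petaFLOPS")
    # A project like BOINC reaches the petaFLOPS scale (1e15 operations per
    # second) only by aggregating many such machines with little idle time.

A single well-equipped machine, in other words, sits in the teraFLOPS range; the petaFLOPS totals quoted for the network come from summing hundreds of thousands of them.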
When the data collection is time sensitive, errors can arise from user authentication and system authorization; people gaining access to the network, and the network gaining access to each volunteer’s computer, can be viewed as sources of human error if not completed correctly.
Another area to attend to is security, to prevent interception or modification of information as it is transmitted. Data needs to be protected as it is passed from node to node. In many of the projects the retrieved data is geo-specific, and any misrepresentation or alteration of the records can seriously corrupt the reporting of the final statistics. Encryption and decryption can play a major role in the security of the project. There are countless methods, but most ensure that the message is encoded when it leaves the home location and decoded only once it reaches its intended destination. Accuracy is critical to enterprise level operations in this field.
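As one illustration of that encode-at-the-source, decode-at-the-destination idea, the sketch below uses symmetric encryption from the third-party cryptography package (its Fernet recipe). A real volunteer computing network would more likely rely on TLS and signed work units, so treat this only as a stand-in for the concept; the record contents are invented.

    # Illustration of encrypting a record before it leaves one node and
    # decrypting it only at its destination. Uses the third-party
    # "cryptography" package (pip install cryptography); a real project
    # would typically rely on TLS and signed work units instead.
    from cryptography.fernet import Fernet

    # In practice the key would have to be distributed securely ahead of time.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    # A made-up geo-specific record from a volunteer node.
    record = b"station=42;lat=35.96;lon=-83.92;reading=17.3"

    # Encode the message as it leaves the home location...
    token = cipher.encrypt(record)

    # ...and decode it only once it reaches the intended destination.
    assert cipher.decrypt(token) == record
    print("record arrived intact:", cipher.decrypt(token).decode())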
Computational Science and High End Computing are the proverbial match made in heaven. Together they have a give-and-give sort of relationship in which the prospects of each are enhanced by the other. They are not completely inseparable, however. Gaming is a major industry for high end computing: the frame rates of today’s video games are only possible with certain machines and graphics cards, and the minimum requirements for games and applications are often published beforehand. Likewise, computations can be done by human beings given a sufficient amount of time. Computers have not been around nearly as long as science and mathematics, and people in those fields practiced them for centuries. The majority of advancements that reach the mainstream begin from a human-proposed thesis and are assisted by, not dependent upon, technology. But the merger is what allows effectiveness and time spent to be maximized.
The combination has expedited and enhanced research in several fields. It has also broadened the possibilities of what can be done. Results can be recreated as graphical representations for discussion, and the data conversion provides a visual that makes these often large sets of raw numbers easier to comprehend. In conclusion, as a student of Computer Science I think that one of the most astonishing feats may be the idea of the pair growing into its own genre rather than remaining within the borders of the CompSci discipline. I am as impressed by the component materials needed to build and modify a high end computer as by its usage in mathematical and scientific applications. Hopefully both will continue to flourish in the future with their ingenuity and popularity. And the next evolution may be just around the corner.
References
[1] Origin PC Corporation. https://www.originpc.com/gaming/desktops/genesis/#spec2
[2] McDonagh, J., Barker, D. and Alderson, R. G. Bringing Computational Science To The Public. SpringerPlus. 2016.
[3] Ostrouchov, G. and Samatova, N. F. High End Computing for Full-Context Analysis and Visualization: when the experiment is done. 2013.
[4] Hinsen, K. Computational science: shifting the focus from tools to models. F1000Research. 2014.
[5] Data Visualization. Wikipedia. https://en.wikipedia.org/wiki/Data_visualization
[6] BOINC: Berkeley Open Infrastructure for Network Computing. http://boinc.berkeley.edu/
[7] Vetter, J., Windus, T., and Gorda, B. Performance Metrics for High End Computing. 2003.