Today there are many specialized ways of storing and retrieving data. Photographs, films, articles and similar materials can be stored in electronic environments. Because such storage creates enormous opportunity, this situation is referred to as the information explosion. Search engines scan related data based on the keywords entered by internet users, and firms use the data they gather to make sales offers. Computer programs that present users with related texts over the internet are also employed by websites. Data that internet users produce through e-mails and messages shared via social media such as Facebook and Twitter can be analysed for security purposes and for detecting terrorist threats. In scientific research there are also many specialized search engines, including GoPubMed in the field of biomedicine. In short, computer-assisted text analysis, which provides search opportunities through different technologies and software, has become widespread.
The data held in libraries and in digital environments mostly take the form of unstructured texts, audio files and visual files. These data need to be transformed into information for researchers, a task that is significant for new research techniques. Therefore, new computer software is needed to analyse such data correctly and systematically.
In educational research the use of text-based, audio-based and visual data has become more frequent in recent years. In parallel with this change, new techniques for handling such data have emerged. At the same time, different analyses are applied to audio and textual data than to visual data (films, photographs, etc.).
The use of text mining has been expanding. For instance, it has been used in customer relationship management. Public institutions also use it to detect fraudulent data. It can be applied to illness reports, economic reports and unpublished research, and offences such as terrorism, aggravated theft and other crimes can be traced using text mining techniques.
In text mining, uncovering patterns resembles factor analysis: patterns in texts are identified with the help of computers. Just as frequency, correlation and significance are examined in qualitative research techniques, the frequency of the words used in a text and their connectedness (correlations and connections) are identified. A further goal is to represent these data visually. Text analysis also makes it possible to reveal individuals' feelings and views, which can likewise be done with audio and photographic data sets.
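As a minimal illustration of the two quantities mentioned above, the following Python sketch counts word frequencies and within-document co-occurrences; the sample documents and the stop-word list are assumptions made only for the example.

```python
# A minimal sketch of frequency and co-occurrence counting in text mining.
# The sample documents and the stop-word list are illustrative assumptions.
import re
from collections import Counter
from itertools import combinations

documents = [
    "Text mining uncovers patterns in large text collections.",
    "Word frequency and word co-occurrence reveal patterns in text.",
    "Visual representations make the patterns easier to interpret.",
]

STOP_WORDS = {"in", "and", "the", "to", "of", "a", "make"}

def tokenize(text):
    """Lowercase the text and keep alphabetic tokens that are not stop words."""
    return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOP_WORDS]

# Word frequencies across the whole collection.
frequencies = Counter()
# Co-occurrence counts: how often two words appear in the same document.
cooccurrence = Counter()

for doc in documents:
    tokens = tokenize(doc)
    frequencies.update(tokens)
    cooccurrence.update(combinations(sorted(set(tokens)), 2))

print("Most frequent words:", frequencies.most_common(5))
print("Most frequent word pairs:", cooccurrence.most_common(5))
```

The resulting frequency and co-occurrence counts are the kind of output that can then be passed to a correlation analysis or a visualization step, which is omitted here.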
In all scientific research, the basic requirements are sound measurement and clean data sets that are error-free and structured. The reliability of study results rests on these qualities of the data.
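As a small sketch of what "clean, error-free and structured" can mean for textual data in practice, the example below normalizes raw records and drops empty or duplicate entries before analysis; the field names, sample records and cleaning rules are assumptions, not taken from the article.

```python
# A minimal sketch of preparing a clean, structured text data set.
# The raw records, field names and cleaning rules are illustrative assumptions.
import re

raw_records = [
    {"id": 1, "text": "  Text mining   needs CLEAN data!  "},
    {"id": 2, "text": ""},                               # empty record: dropped
    {"id": 3, "text": "Text mining needs clean data!"},  # duplicate after cleaning
]

def normalize(text):
    """Lowercase, strip punctuation and collapse repeated whitespace."""
    text = text.lower()
    text = re.sub(r"[^\w\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip()

cleaned, seen = [], set()
for record in raw_records:
    text = normalize(record["text"])
    if not text or text in seen:  # drop empty and duplicate entries
        continue
    seen.add(text)
    cleaned.append({"id": record["id"], "text": text})

print(cleaned)  # one structured, error-free record remains
```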
It is thought that text mining will become easier in the future, because Web 3.0 technologies, also called the semantic web, will analyse texts and objects on the internet together with their meaning. In other words, texts will be placed on the internet after text mining has been applied to them, so that metadata vocabulary groups (ontologies) and groups of words and concepts (concept maps) will already be embedded in the texts.
The basic aim of Web 3.0 technology, or the semantic web, is to turn data into usable and connected information. Web 3.0 technology would assume the role of an assistant that knows everything. At present, when search engines are used, only websites containing the relevant keywords are listed.
When Web 3.0 technology or the semantic web is used, search engines will interpret websites and offer only those that are genuinely related.
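To make the contrast concrete, the sketch below compares plain keyword matching with a toy query expansion based on a hand-made synonym map; the documents and the synonym table are hypothetical and only stand in for the far richer ontologies the semantic web would provide.

```python
# A toy contrast between keyword-only retrieval and ontology-assisted retrieval.
# The documents and the synonym map are illustrative assumptions.
documents = {
    "site-a": "New techniques for analysing unstructured text collections.",
    "site-b": "Mining large document archives for hidden patterns.",
    "site-c": "A photo gallery of mountain landscapes.",
}

# A tiny, hand-made stand-in for a semantic-web ontology.
SYNONYMS = {"text": {"document", "archives"}, "analysing": {"mining"}}

def keyword_search(query):
    """Current behaviour: list only pages that literally contain a query word."""
    terms = set(query.lower().split())
    return [name for name, body in documents.items()
            if terms & set(body.lower().split())]

def semantic_search(query):
    """Expand the query with related concepts before matching."""
    terms = set(query.lower().split())
    for term in list(terms):
        terms |= SYNONYMS.get(term, set())
    return [name for name, body in documents.items()
            if terms & set(body.lower().split())]

print(keyword_search("analysing text"))   # matches site-a only
print(semantic_search("analysing text"))  # also finds site-b via related concepts
```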
The Web Ontology Language (OWL) will serve as a standard that defines common vocabulary and terminology, while Resource Description Framework (RDF) statements will be placed in files that encode the vocabulary hierarchy and the relations among objects.
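For illustration, the sketch below builds a small RDF graph of the kind such files would contain, using the rdflib Python library; the example namespace, class names and the topic property are assumptions made for the example.

```python
# A minimal sketch of an RDF vocabulary hierarchy with rdflib (pip install rdflib).
# The example namespace, class names and the "topic" property are assumptions.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/education/")

g = Graph()
g.bind("ex", EX)

# A tiny vocabulary hierarchy: ResearchArticle is a kind of Document.
g.add((EX.Document, RDF.type, RDFS.Class))
g.add((EX.ResearchArticle, RDF.type, RDFS.Class))
g.add((EX.ResearchArticle, RDFS.subClassOf, EX.Document))

# A relation between objects: an article about text mining.
g.add((EX.article1, RDF.type, EX.ResearchArticle))
g.add((EX.article1, EX.topic, Literal("text mining")))

print(g.serialize(format="turtle"))
```

Serialized in Turtle, such a file captures both the vocabulary hierarchy (via rdfs:subClassOf) and the relations among objects referred to above; OWL builds richer constructs on top of this RDF layer.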