also gives users many benefits, such as explanations of specific knowledge points. Through this part of the training, users develop a more thorough understanding of new technology and e-commerce and gradually explore unfamiliar areas as they learn, so that they not only learn but also reflect in a timely manner.
During the training, a shared database needs to be created: resources are uploaded to the platform using cloud computing technology, and functions such as a distributed file system combine different types of storage devices into a single collection to implement cloud storage.
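As a minimal sketch of the upload step, the following Python snippet writes a training resource into HDFS with the HdfsCLI library; the namenode address, user name, and paths are illustrative assumptions, not values from this paper.

```python
from hdfs import InsecureClient  # HdfsCLI: pip install hdfs

# Hypothetical namenode address and user; replace with the cluster's values.
client = InsecureClient("http://namenode:9870", user="hr_platform")

# Create a shared directory for training resources and upload one file.
client.makedirs("/shared/training")
client.upload("/shared/training/ecommerce_basics.pdf",
              "ecommerce_basics.pdf", overwrite=True)

# List the directory to confirm the resource is available to other users.
print(client.list("/shared/training"))
```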
Information sharing, which covers both sharing within the enterprise and sharing with external parties, is responsible for cataloging, publishing, storing, and retrieving data. In internal sharing, for example, the first thing displayed is the data catalog, whose contents include department planning, technology sharing, and department training. Once published, information is stored, and users can find the content they want through retrieval. Through network data sharing, employees understand the internal development of the enterprise better, position themselves more clearly, and work toward the common goal of progressing together (Zong, 2012).
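To make the catalog, publish, store, and retrieve flow concrete, here is a hedged sketch using Spark's Hive support; the table name and columns are illustrative, with the categories taken from the catalog described above.

```python
from pyspark.sql import SparkSession

# Hypothetical Spark session with Hive support; all names are illustrative.
spark = (SparkSession.builder.appName("info_sharing")
         .enableHiveSupport().getOrCreate())

# Catalog table: each published item carries a category shown in the catalog.
spark.sql("""
    CREATE TABLE IF NOT EXISTS shared_info (
        category STRING,  -- department planning / technology sharing / department training
        title    STRING,
        body     STRING,
        pub_date DATE
    )
""")

# Publish (store) an item, then retrieve items by catalog category.
spark.sql("INSERT INTO shared_info VALUES "
          "('department training', 'Big data basics', "
          "'Introduction to Flume, Sqoop and HDFS', DATE '2024-01-15')")
spark.sql("SELECT title, pub_date FROM shared_info "
          "WHERE category = 'department training'").show()
```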
After the training, employees will have a deeper understanding of new technology and e-commerce and a stronger ability to apply that knowledge.
4.2.3 Guidance and Management
In guided management, e-commerce human resource managers first make forecasts about employees and then, drawing on all aspects of each employee's situation, provide targeted guidance, thereby achieving the goals of accurate prediction and efficient decision-making and, in turn, efficient management. This part is divided into two components: prediction and targeted guidance.
Prediction. The forecast is divided into four parts. First, database data analysis, which preprocesses the raw data: Flume collects the raw data, Sqoop transfers it, and it is stored in HDFS; MapReduce and Spark compute and clean the data, and the processed data is analyzed with Hive. Second, status data analysis, which re-analyzes the processed data using specific algorithms that project future development directions, such as the hiring rate, turnover rate, and internal transfer rate. Third, operation trend forecasting, which monitors the data generated during operation. Fourth, predictive warnings: after monitoring and analysis, data whose trend indicates an abnormal state is sent to the platform manager, who informs the relevant personnel or handles it directly. This part enables the enterprise to predict employees' development trends and potential turnover tendencies, preparing for the targeted guidance in the next step.
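A minimal sketch of the status-analysis and warning steps follows, assuming the cleaned employee records already sit in HDFS as Parquet; the path, column names, and the 10% warning threshold are illustrative assumptions, not values from this paper.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("hr_forecast").getOrCreate()

# Cleaned employee records, assumed to be written to HDFS by the upstream
# Flume/Sqoop/MapReduce steps (path and columns are hypothetical).
emp = spark.read.parquet("hdfs:///hr/cleaned/employees")

# Status data analysis: turnover rate per department = leavers / headcount.
rates = emp.groupBy("department").agg(
    (F.sum(F.when(F.col("left_company"), 1).otherwise(0)) / F.count("*"))
    .alias("turnover_rate")
)

# Predictive warning: flag departments above an assumed 10% threshold so the
# platform manager can notify the relevant personnel.
rates.filter(F.col("turnover_rate") > 0.10).show()
```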
Targeted guidance. Targeted guidance is designed according to employees' age, position, needs, performance, salary, benefits, and other factors. This part uses the basic technologies of big data: Flume collects the employee data, Sqoop transfers it, HDFS stores it, and the data is then computed and cleaned. This process uses the MapReduce tool, in which Map preprocesses the data, screens out the records to be used, and groups them, and Reduce then applies custom calculation methods to the data and summarizes it (a sketch of this pattern follows this paragraph). After the data is collected, Hive analyzes the cleaned data and ECharts displays the results. Managers give targeted guidance to employees according to the earlier predictions and the current data display. For example, different methods are used to motivate employees of different ages, according to their different work requirements, to work efficiently.
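The Map and Reduce steps described above can be sketched with Spark's RDD API, which mirrors the same pattern; the input path, the CSV layout (employee id, age, performance score), and the age-band grouping are illustrative assumptions rather than details from this paper.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("targeted_guidance").getOrCreate()
sc = spark.sparkContext

# Assumed CSV lines in HDFS: emp_id,age,performance (path is hypothetical).
lines = sc.textFile("hdfs:///hr/cleaned/employee_scores.csv")

# Map step: preprocess each line, screen out malformed records, and group
# records by age band (the key), e.g. 20 for the twenties, 30 for the thirties.
def to_pair(line):
    try:
        _, age, score = line.split(",")
        return ((int(age) // 10) * 10, (float(score), 1))
    except ValueError:
        return None

pairs = lines.map(to_pair).filter(lambda p: p is not None)

# Reduce step: custom calculation -- sum scores and counts per band,
# then summarize as an average score.
totals = pairs.reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1]))
averages = totals.mapValues(lambda t: round(t[0] / t[1], 2))

print(averages.collect())  # results would then go to Hive/ECharts downstream
```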
4.3 Technical Support
4.3.1 Big Data Technology
The big data technologies used in this paper are basic ones: Flume, Sqoop, HDFS, MapReduce, Spark, Hive, and ECharts. These basic technologies are used at every stage of the process: collecting, storing, computing, and cleaning the raw data; analyzing and querying it; and applying the results. They share the characteristics of simple operation, fast computation, large scale, and high security, and the tool used in the data visualization step supports diverse displays. With these technologies, data management becomes intelligent and the data obtained is the data expected; in addition, the displayed results are diverse, and can be represented either as dynamic graphs or in other, static ways.
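As one example of that diversity, the following sketch renders a result set as an interactive (dynamic) bar chart with pyecharts, a Python binding for ECharts; the department names and rates are made-up illustration data.

```python
from pyecharts import options as opts
from pyecharts.charts import Bar

# Illustrative result data, e.g. per-department turnover rates from Hive.
departments = ["Sales", "Logistics", "Customer Service", "IT"]
rates = [0.12, 0.08, 0.15, 0.05]

bar = (
    Bar()
    .add_xaxis(departments)
    .add_yaxis("Turnover rate", rates)
    .set_global_opts(title_opts=opts.TitleOpts(title="Turnover rate by department"))
)

# Dynamic display: an interactive HTML chart; a static image could be
# exported instead for reports.
bar.render("turnover_by_department.html")
```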