
LSTM with Multiple Input Features

LSTM networks are a great fit for time series forecasting, which is challenging especially when working with long sequences, noisy data, multi-step forecasts, and multiple input and output variables. Classical linear methods can be difficult to adapt to multivariate or multiple-input forecasting problems, so this is a real benefit. Think of the LSTM as just a new LEGO piece to use when building your neural network: toward the end of this post there is an example importing training data with 5 input variables and one output, and a model can even have two outputs, one for classification and the other for regression.

Keras LSTMs take data in the format [samples, time steps, features], so the first task is to transform the input into that 3D structure using numpy.reshape(). The LSTM input layer is then defined by the input_shape argument on the first hidden layer, which means you must know the size of your timesteps and features up front. In our running example, timesteps is 50 and the number of input features is 2 (the volume of stocks traded and the average stock price). To frame a series as supervised learning, use the observation from the last time step (t-1) as the input and the observation at the current time step (t) as the output. Since different input features may have different magnitudes, standardize each feature to the range [0, 1] with max-min normalization:

    x' = (x - min(x)) / (max(x) - min(x))

Inside the cell, x_t and h_{t-1} — the current input and the previous hidden state — are processed by fully-connected layers with a sigmoid activation function to compute the values of the input, forget, and output gates. When the cells are unrolled over time, the output is still (batch size, number of time steps, hidden size). Layers are just the number of cells we put together, and bidirectional models can remarkably outperform unidirectional ones on long-term dependencies. Note, however, that simply concatenating extra factors into the input vector of an LSTM (Hochreiter and Schmidhuber, 1997) can be harmful when some of those factors are useless; one line of work proposes a multi-input LSTM unit with specially designed input gates to distinguish mainstream from auxiliary factors, and multi-target training has been explored as well (e.g., multiple-target deep learning for LSTM-RNN based speech enhancement, Sun et al.). The same input-preparation mindset applies across domains: 2D LSTM networks split input images into grids before passing them to the LSTM layer, CNN-LSTM architectures extract spatio-temporal features from video, and raw time-series data from single or multiple sensors reaches the LSTM input layer via an input preparation process. One final observation: 450 input features on 801 timesteps is a lot; consider a dimensionality reduction technique first.
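Here is a minimal NumPy sketch of that layout (all names and sizes are illustrative, not from the original posts): two parallel daily series — traded volume and average price — are sliced into 50-step windows and stacked into [samples, time steps, features].

    import numpy as np

    n_days = 500
    volume = np.random.rand(n_days)              # stand-in for traded volume
    price = np.random.rand(n_days)               # stand-in for average price
    series = np.stack([volume, price], axis=1)   # shape (500, 2)

    timesteps = 50
    windows = np.array([series[i:i + timesteps]
                        for i in range(n_days - timesteps)])
    print(windows.shape)   # (450, 50, 2): samples, time steps, features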
An LSTM is a subnet that can memorize context information over long periods of time in sequence data. It selects information mainly through three gate structures — the input gate, forget gate, and output gate — and the architecture contains four neural network layers interacting in a special structure. Because a trained LSTM layer exposes the same interface as any other layer, a stack of them can be regarded as multiple LSTM feature extractors, and the idea extends naturally to distributed representations of words, where each input is represented by many features and each feature is involved in many possible inputs. A one-layer LSTM model has six groups of parameters: weights and biases for the input-to-hidden, hidden-to-hidden, and hidden-to-output affine functions.

The LSTM input layer must be 3D. In Keras, the input needs to be reshaped from [number_of_entries, number_of_features] to [number_of_entries, timesteps, number_of_features]; each row of the result holds the different features for the same timestamp. (PyTorch's documentation expresses the same requirement through the input_size argument: "the number of expected features in the input x".) Getting this wrong is the usual cause of errors about the number of input dimensions. For example, a classifier over 801 timesteps of 450 features with 6 classes:

    model = Sequential([
        LSTM(32, input_shape=(801, 450)),
        Dense(6, activation='softmax')
    ])
    model.compile(loss='categorical_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy'])

Code written this way generalizes to datasets with more than 2 features, to multiple-length sequence inputs, and to predicting multiple steps ahead — only the shapes change. Interestingly, an LSTM cell can even take a sequence of labels as inputs, i.e., be used on categorical features only, and still produce good results.

Multiple inputs also do not have to arrive through a single tensor. When mixing sequences with extra features, define two input layers and treat them as separate branches (nlp_input and meta_input): the NLP data goes through the embedding transformation and the LSTM layer, and the metadata joins afterwards. Prepending Conv1D layers is another trick: they smooth the input time series, so we don't have to add rolling-mean or rolling-standard-deviation columns to the input features. Lip-reading systems do the analogous thing with modalities, gathering depth image sequences in addition to color image sequences (many such systems simplify the problem by restricting themselves to limited numbers of phrases and speakers). Once the model is assembled, it is trained, the test is run, and finally the results are graphed.
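A minimal Keras functional-API sketch of that two-branch design (the vocabulary size, embedding width, sequence length, and number of metadata columns are illustrative assumptions, not values from the post):

    from tensorflow.keras.layers import Input, Embedding, LSTM, Dense, concatenate
    from tensorflow.keras.models import Model

    nlp_input = Input(shape=(100,), name='nlp_input')    # 100 token ids
    meta_input = Input(shape=(8,), name='meta_input')    # 8 extra features

    x = Embedding(input_dim=10000, output_dim=64)(nlp_input)
    x = LSTM(64)(x)                                      # encode the sequence
    x = concatenate([x, meta_input])                     # merge with metadata
    out = Dense(1, activation='sigmoid')(x)

    model = Model(inputs=[nlp_input, meta_input], outputs=out)
    model.compile(loss='binary_crossentropy', optimizer='adam')

The merge happens after the LSTM, so the metadata is not forced through the recurrence; it only influences the final decision layers.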
Preparing input data for an LSTM deserves real attention; as a concrete target, the model built here can forecast 30 steps ahead based on the previous 60 observations of 2 features. Some knowledge of LSTM or GRU models is preferable from here on; if you are not familiar with LSTMs, read an introduction to long short-term memory first.

Each LSTM cell has three inputs — h_{t-1}, c_{t-1} and x_t — and two outputs, h_t and c_t. For a given time t, h_t is the hidden state, c_t is the cell state or memory, and x_t is the current data point or input. Just like in GRUs, the data feeding into the LSTM gates is the input at the current time step and the hidden state of the previous time step. The input gate determines what information should become part of the cell state (the memory of the LSTM); it considers two functions, the first of which filters the previous hidden state h(t-1) and the current time step x(t) through a sigmoid, so that first sigmoid layer has two inputs, h(t-1) and x(t). This is what separates LSTMs from plain RNNs: the repeating module in a standard RNN contains a single layer, such as one tanh layer, while LSTMs keep the same chain-like structure but put four layers in the repeating module, interacting in a very special way.

"Univariate" here means the input is multiple time steps and the output is a single time step. In the Keras code for that model, n_steps is how many time steps each input X covers and n_features is the number of series per time step; this is the most basic model structure, and the later models are compared against it.

The same input conventions recur across more elaborate systems: a multi-behavior LSTM with 64 hidden units and 64-dimensional bottleneck features; an ELF-LSTM that integrates multiple traffic features for a multiclass classification problem; encoder-decoder models that concatenate two attention feature vectors with the word embedding and feed that three-way concatenation into the decoder LSTM; and LSTM autoencoders that use a couple of LSTM layers to capture a compressed representation. A few practical notes from recurring questions: sequence regression over long windows (say four features and 720 time steps) works, though dimension errors are common at first; 160 files, each representing one user's behavior over multiple days, become 160 sequences; the input and output need not be the same length; dropout is sometimes added between LSTM cells; and since the input features are on different scales, they must be normalized. In MATLAB, a sequence-to-one regression network is a layer array containing a sequence input layer (sized to the number of features), an LSTM layer, a fully connected layer (sized to the number of responses), and a regression output layer. By training on many such examples from the past 2 years, the LSTM can learn the movement of prices.
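That last recipe is MATLAB's Deep Learning Toolbox; since the rest of this post uses Keras, here is a rough Keras analogue as a sketch (assuming the four-feature, 720-step series mentioned above and a single response — the hidden size is arbitrary):

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM, Dense

    num_timesteps = 720   # steps per sequence, as in the example above
    num_features = 4      # features per step
    num_responses = 1     # fully connected layer sized to the responses

    model = Sequential([
        LSTM(128, input_shape=(num_timesteps, num_features)),
        Dense(num_responses)   # linear output acts as the regression layer
    ])
    model.compile(loss='mse', optimizer='adam')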
You can stack as many LSTM layers as you want. If the input to an LSTM layer is already the output of another LSTM layer (or a feedforward layer), the current LSTM builds a more complex feature representation of it, and the parameters are well distributed across the multiple layers; a deep LSTM is simply a number of LSTM layers between the input and the output. Concretely, if each input sample has 2 timesteps with 3 features, the output of a stacked layer has 2 timesteps as well (because return_sequences=True), each with 4 values (because 4 is the size passed to the LSTM).

The TensorFlow time series tutorial is a good introduction here: it builds a few different styles of models, including convolutional and recurrent neural networks (CNNs and RNNs), covered in two main parts with subsections, starting from forecasting a single timestep from a single feature. Because neural networks like LSTMs can almost seamlessly model problems with multiple input variables, a natural question is whether lag observations for a univariate time series can be used as features for an LSTM and whether that improves forecast performance. Framed that way, the input to the model is a 3-dimensional array of shape samples x lookback x features; for stock prices, X represents the last 10 days' prices and y the 11th-day price. At every time step the LSTM receives, besides the recurrent input from the previous step, the current feature vector: the hidden states of the unit are updated and fed into the next copy of the unit together with the feature vector at time t-4, and so on until time t-1. Sequence-to-sequence learning uses the same machinery to map an input sequence to an output sequence, with RNN-like models feeding the prediction of the current run as input to the next run.

Mind which axis means what: (len(dataX), 3, 1) runs the LSTM for 3 iterations, inputting a vector of shape (1,) at each step, whereas (len(dataX), 1, 3) runs it for a single iteration over a 3-feature vector. Hybrid architectures use the same input machinery — for example one built on Inception 3D (I3D) and LSTM for spatio-temporal information capture plus ResNet for spatial information, or Graph LSTM layers whose input states come from the previous convolutional feature maps; in images, temporal dependency learning is converted to the spatial domain. (A side note on a common question: TimeDistributedDense is not needed for a single prediction; a normal Dense layer with linear activation on the LSTM output works.)
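A runnable sketch of the stacking-and-shapes point (the 2-timestep/3-feature/size-4 numbers come from the example above; everything else is illustrative):

    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM

    model = Sequential([
        LSTM(4, return_sequences=True, input_shape=(2, 3)),  # keeps time axis
        LSTM(4)                                              # consumes the sequence
    ])

    x = np.random.rand(5, 2, 3)    # 5 samples, 2 timesteps, 3 features
    print(model.predict(x).shape)  # (5, 4): last layer returns the final step only

Between stacked layers, return_sequences=True is what hands the full (samples, timesteps, units) tensor to the next LSTM instead of only the last step.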
Unlike traditional neural networks, in which there is no dependency between successive calculation results, an RNN considers both the current input and the result of the last hidden layer. The price is training time: to achieve long-term memory, RNN models require a significant amount of it. There are four main variants of sequence models — one-to-one (one input, one output), one-to-many (one input, variable outputs), many-to-one, and many-to-many — and in the one-to-one case it is quite useless to even have recurrent connections, since there can't be any feedback from previous iterations. With a time component in the data, a model can also take multiple inputs and produce multiple outputs.

Whatever the variant, the key is in the data entry. Suppose the input data is {X_1, X_2, X_3, ..., X_m}; taking the t-th sample, X_t is the input at the current moment. You just need to prepare your data with shape [batch_size, time_steps, n_features], the format required by all the main deep learning libraries (PyTorch, Keras, TensorFlow). The layout is the same whether the input is 600 timesteps of 5 features, an already-featurized 561-feature vector of time- and frequency-domain variables, concatenated input data for multiple frames, or application-monitoring series — past performance of the application, CPU usage of the hosting server, memory usage, network bandwidth, and so on. The LSTM model then needs this data split into X and y form.

Inside the cell, the gates are parameterized by learned weight matrices W_f, W_i and W_o with biases b_f, b_i and b_o. The forget gate, for instance, computes

    f_t = σ(W_f · [h_{t-1}, x_t] + b_f)

where f_t denotes the output of the forget gate at time step t and σ is the logistic sigmoid function; i_t and o_t likewise denote the outputs of the input and output gates. From this perspective an LSTM is not so different from ordinary neural network layers, which consist of multiple units each receiving input from all the (weighted) activations of the previous layer — an LSTM essentially uses four such layers per step to handle more complex features (e.g., long-term dependency in text).

Several architectures build directly on these pieces. A bidirectional LSTM can be combined with metadata by merging its output with the extra features. In hierarchical designs, the bottom LSTM unit, equipped with input and output gates, extracts the first-order feature representation from the current word, while a DLSTM — a stack of LSTM units — holds different orders of feature representation at different depths and can serve as a feature mapping for a CNN. ConvLSTM is a variant of LSTM containing a convolution operation inside the LSTM cell. One proposed CNN-LSTM detector first uses the CNN to extract energy-correlation features from the covariance matrices generated by sensing data, then feeds the series of those features over multiple sensing periods into the LSTM so the primary user's activity pattern can be learned. In captioning models, attention is computed separately for each of two encoded features (the hidden states of the LSTM encoder and P3D features) based on the previous decoder hidden state, and a multi-state LSTM can be further improved by taking into account the correlations between multiple labels. Both LSTMs and GRUs are a special kind of RNN, capable of learning long-term dependencies.
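To make the equations concrete, here is a from-scratch NumPy sketch of a single LSTM step using the learned matrices W_f, W_i, W_o (plus candidate weights W_c) and biases; sizes and parameters are arbitrary, so this is illustration, not a trained model:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x_t, h_prev, c_prev, W, b):
        concat = np.concatenate([h_prev, x_t])       # [h_{t-1}, x_t]
        f_t = sigmoid(W['f'] @ concat + b['f'])      # forget gate
        i_t = sigmoid(W['i'] @ concat + b['i'])      # input gate
        o_t = sigmoid(W['o'] @ concat + b['o'])      # output gate
        c_hat = np.tanh(W['c'] @ concat + b['c'])    # candidate cell state
        c_t = f_t * c_prev + i_t * c_hat             # new cell state (memory)
        h_t = o_t * np.tanh(c_t)                     # new hidden state
        return h_t, c_t

    n_in, n_hid = 5, 8
    rng = np.random.default_rng(0)
    W = {k: rng.standard_normal((n_hid, n_hid + n_in)) for k in 'fioc'}
    b = {k: np.zeros(n_hid) for k in 'fioc'}
    h, c = np.zeros(n_hid), np.zeros(n_hid)
    h, c = lstm_step(rng.standard_normal(n_in), h, c, W, b)
    print(h.shape)   # (8,)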
Features are nothing but the time-dependent variables, and multiple features must be considered for every timestamp; a multivariate problem therefore means multiple parallel input sequences, each from a different source. The Long Short-Term Memory network in Keras supports multiple input features directly. Data preparation for LSTM networks involves consolidation (combining disparate data — an Excel spreadsheet, a PDF report, a database, cloud storage — into a single repository), cleansing, separating the input window from the output, scaling, and dividing the data for training and validation.

For scaling, we map the values between 0 and 1 for better accuracy using MinMaxScaler:

    # normalizing input features
    scaler = MinMaxScaler(feature_range=(0, 1))
    scaled = scaler.fit_transform(values)
    scaled = pd.DataFrame(scaled)
    scaled.head(4)   # inspect the data after normalization

Class balance deserves the same inspection: in the news-category example, data.CATEGORY.value_counts() gives e 152469, b 115967, t 108344, m 45639 — the m class has far less data than the others, so the classes are unbalanced.

The same input conventions show up in less typical uses. An autoencoder can learn a latent representation for a text sequence with 3 features by reconstruction — in a sense, autoencoders try to learn only the most important features (a compressed version) of the data. An open question in that setting: when some sequences are shorter than the maximum pad length being considered (seq_length=15), will the reconstruction learn to ignore the padded timesteps when computing loss and accuracy? Masking, sketched below, is the usual answer. Anomaly detection is another application — anomalies are deviations from intended system operation and can lead to decreased efficiency or partial or complete system failure — as is predicting the performance of an application from monitoring data. LSTM modules have even been applied to point clouds: at each time step, one proposed module combines the point-cloud features from the current frame with the hidden and memory features from the previous frame to predict 3D objects. And the classic case remains stock price forecasting, where an LSTM model can be built and deployed on different forms of input data, for instance by modifying a univariate example to predict 1 output from 5 inputs.
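Here is a minimal Keras sketch of that masking answer (sizes assumed; seq_length=15 and 3 features as above): pad the short sequences with zeros, then let a Masking layer tell the LSTM to skip the all-zero steps.

    import numpy as np
    from tensorflow.keras.preprocessing.sequence import pad_sequences
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Masking, LSTM, Dense

    seqs = [np.random.rand(n, 3) for n in (7, 12, 15)]   # variable lengths
    x = pad_sequences(seqs, maxlen=15, dtype='float32',
                      padding='post', value=0.0)         # shape (3, 15, 3)

    model = Sequential([
        Masking(mask_value=0.0, input_shape=(15, 3)),    # ignore padded steps
        LSTM(32),
        Dense(1)
    ])
    model.compile(loss='mse', optimizer='adam')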
A common LSTM unit is composed of a cell, an input gate, an output gate, and a forget gate. Long short-term memory is an artificial recurrent neural network (RNN) architecture used in deep learning; unlike standard feedforward neural networks, an LSTM has feedback connections, so it can process not only single data points (such as images) but entire sequences of data (such as speech or video) — CNNs, by contrast, are used for problems with spatial inputs like images. That flexibility is why LSTMs appear in settings as varied as social-post classification, using the Word2Vec word embedding model for distributed representation of the posts, and group activity recognition in multiple-person scenes, which must cope with varying facial features, skin colors, speaking speeds, and intensities; the hierarchical nature of such architectures allows them to operate at different time scales.

The LSTM model in Keras assumes that the data is divided into input (x) and output (y) components, and the sample dimension can be left as None for any number of rows (observations). A typical configuration: an input_signal layer whose shape shows 160 time steps with 12 features, followed by lstm1 with 128 LSTM units and return_sequences=True. The advantage of such stacks is that the input values fed to the network not only go through several LSTM layers but also propagate through time within each LSTM cell.

For quick experiments it helps to generate random multivariate input. The helper below builds n_features series of n_timesteps each; note it returns shape (n_features, n_timesteps, 1), so transpose before feeding it to an LSTM:

    def rnd_io(n_features, n_timesteps):
        arr = []
        for i in range(n_features):
            arr.append(np.random.randint(100, size=(n_timesteps, 1)))
        return np.array(arr)

Finally, state. Consider batch_size=1 and time_sequence=1: in stateful mode the network carries its hidden state across batches instead of resetting it after every one. Conclusion of this part: our stateful LSTM model works quite well to learn long sequences.
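A minimal sketch of that stateful setup (sizes illustrative): batch_input_shape pins batch_size=1 and time_sequence=1, shuffling is disabled so the state flows through the series in order, and the state is reset manually between epochs.

    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM, Dense

    n_features = 2
    model = Sequential([
        LSTM(32, stateful=True, batch_input_shape=(1, 1, n_features)),
        Dense(1)
    ])
    model.compile(loss='mse', optimizer='adam')

    x = np.random.rand(100, 1, n_features)   # 100 single-step samples
    y = np.random.rand(100, 1)
    for epoch in range(3):
        model.fit(x, y, batch_size=1, epochs=1, shuffle=False)
        model.reset_states()   # fresh state before the next pass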
