
How to Represent Data for Neural Networks

Introduction

All machine learning systems use tensors (multidimensional arrays) as their primary data structure; data are stored in the form of tensors. For neural networks, data are represented mainly in the following formats:

 

  • Scalars (0D tensors): A tensor that contains only one number is called a scalar (0-dimensional tensor). In NumPy, a float32 or float64 number is a scalar tensor (see the example just after this list).
  • Vectors (1D tensors): An array of numbers is called a vector, or 1D tensor. A 1D tensor has exactly one axis.
  • Matrices (2D tensors): An array of vectors is called a matrix, or 2D tensor. A 2D tensor has two axes.
  • 3D and higher-dimensional tensors: Packing lower-dimensional tensors into a new array produces a tensor with one additional axis.
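A quick NumPy illustration of a scalar tensor:

>>> import numpy as np
>>> x = np.array(12)
>>> x
array(12)
>>> x.ndim
0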

 

Tensors are fundamental to the field. At its core, a tensor is a container for data, and that data is almost always numerical. Therefore, it's a container for numbers.

 

For example, x = np.array([1, 2, 3, 4, 5, 6, 7, 8]) has 8 entries and is therefore called an 8-dimensional vector. An 8D vector and an 8D tensor are not the same thing: an 8D vector has only one axis, with eight dimensions (entries) along it, whereas an 8D tensor has eight axes and may have any number of dimensions along each axis. Dimensionality can denote either the number of entries along a specific axis (as in the case of our 8D vector) or the number of axes in a tensor (as in an 8D tensor).
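We can check this distinction in the NumPy REPL (assuming NumPy has been imported as np, as above):

>>> x = np.array([1, 2, 3, 4, 5, 6, 7, 8])
>>> x.ndim    # one axis, so this is a 1D tensor...
1
>>> x.shape   # ...with eight entries along that axis
(8,)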

Vectors (1D tensors)

A vector is an array of numbers. It is also called a 1D tensor. A 1D tensor has exactly one axis.

 

The following is a NumPy vector:

 

>>> x = np.array([12, 3, 6, 14, 7])
>>> x
array([12,  3,  6, 14,  7])
>>> x.ndim
1

This vector has five entries and so is called a 5-dimensional vector. Don't confuse a 5D vector with a 5D tensor! A 5D vector has only one axis, with five dimensions along it, whereas a 5D tensor has five axes (and may have any number of dimensions along each axis). Dimensionality can denote either the number of entries along a specific axis or the number of axes in a tensor, which can be confusing at times. In the latter case, it's technically more correct to talk about a tensor of rank 5 (the rank of a tensor being the number of axes), but the ambiguous notation 5D tensor is common regardless.
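To make the distinction concrete, here is a small sketch; the shape (2, 2, 2, 2, 2) is an arbitrary choice for illustration:

>>> v = np.array([12, 3, 6, 14, 7])  # a 5D vector: one axis, five entries
>>> v.ndim
1
>>> t = np.zeros((2, 2, 2, 2, 2))    # a rank-5 tensor: five axes
>>> t.ndim
5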



Matrices (2D tensors)

A matrix is an array of vectors. It is also called a 2D tensor. A matrix has two axes, often referred to as rows and columns. We can visually understand a matrix as a rectangular grid of numbers.

 

This is a NumPy matrix:

>>> x = np.array([[5, 78, 2, 34, 0],
                  [6, 79, 3, 35, 1],
                  [7, 80, 4, 36, 2]])
>>> x.ndim
2

The entries from the first axis are called the rows, and the entries from the second axis are called the columns. In the example above, [5, 78, 2, 34, 0] is the first row of x, and [5, 6, 7] is the first column.
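Continuing the same session, NumPy indexing lets us pull out that row and column directly:

>>> x[0]       # first row
array([ 5, 78,  2, 34,  0])
>>> x[:, 0]    # first column
array([5, 6, 7])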



3D and higher-dimensional tensors

If we pack such matrices into a new array, we obtain a 3D tensor, which we can visually interpret as a cube of numbers.

 

The following is a NumPy 3D tensor:

>>> x = np.array([[[5, 78, 2, 34, 0],
                   [6, 79, 3, 35, 1],
                   [7, 80, 4, 36, 2]],
                  [[5, 78, 2, 34, 0],
                   [6, 79, 3, 35, 1],
                   [7, 80, 4, 36, 2]],
                  [[5, 78, 2, 34, 0],
                   [6, 79, 3, 35, 1],
                   [7, 80, 4, 36, 2]]])
>>> x.ndim
3

We can create a 4D tensor by packing 3D tensors into an array, and so on. In deep learning, we'll generally manipulate tensors that range from 0D to 4D.
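As a quick sketch of this packing step (continuing the session above, where x is the 3D tensor), np.stack adds the new axis for us:

>>> x4 = np.stack([x, x])   # pack two 3D tensors into a 4D tensor
>>> x4.ndim
4
>>> x4.shape
(2, 3, 3, 5)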

What are the key attributes of a tensor?

We can define a tensor by three key attributes, illustrated in the snippet after this list:

 

  • Number of axes

For example, a 3D tensor has three axes, and a matrix has two. This is also called the tensor's ndim in Python libraries such as NumPy.

 

  • Shape of the tensor

This is a tuple of integers that describes how many dimensions the tensor has along each axis. For instance, the previous matrix example has shape (3, 5), and the 3D tensor example has shape (3, 3, 5). A scalar has an empty shape, (), and a vector has a shape with a single element, such as (5,).

 

  • Its data type

This is the type of data contained in the tensor. For example, a tensor's type may be float32; it may also be uint8, float64, and so on. On rare occasions, you may see a char tensor. Tensors live in pre-allocated, contiguous memory segments; therefore, string tensors don't exist in NumPy or in most other libraries.
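Here is how the three attributes look for the 3D tensor built earlier (the exact dtype reported depends on your platform; int64 is typical on Linux and macOS):

>>> x.ndim     # number of axes
3
>>> x.shape    # dimensions along each axis
(3, 3, 5)
>>> x.dtype    # data type of the entries
dtype('int64')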

 

Graph Neural Networks

A Graph Neural Network (GNN), as the name suggests, is a neural network that can be applied directly to graphs. It provides a convenient way to perform node-level, edge-level, and graph-level prediction tasks. There are three main types of graph neural networks (a short sketch after the list shows how graph data itself can be represented as tensors):

 

  • Recurrent Graph Neural Network
  • Spatial Convolutional Network
  • Spectral Convolutional Network
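Tying this back to tensor representations: graph data is commonly fed to a GNN as a pair of tensors, a node-feature matrix and an adjacency matrix. Below is a minimal sketch for a hypothetical three-node path graph; the feature values are made up for illustration:

>>> features = np.array([[0.1, 0.5],   # one row of features per node: shape (3, 2)
                         [0.9, 0.2],
                         [0.4, 0.7]])
>>> adjacency = np.array([[0, 1, 0],   # adjacency[i][j] = 1 if nodes i and j are connected
                          [1, 0, 1],
                          [0, 1, 0]])
>>> features.shape, adjacency.shape
((3, 2), (3, 3))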