0. TensorFlow Fundamentals

(Reference) Udemy - TF Developer in 2022

(1) Check TensorFlow Version

import tensorflow as tf
print(tf.__version__) 
2.5.0

(2) Create Tensors ( tf.constant() )

scalar = tf.constant(7)
vector = tf.constant([10, 10])
matrix = tf.constant([[10, 7],
                      [7, 10]])
                      
print(scalar)
print(vector)
print(matrix)
print(scalar.ndim)
print(vector.ndim)
print(matrix.ndim)
print(scalar.shape)
print(vector.shape)
print(matrix.shape)
tf.Tensor(7, shape=(), dtype=int32)
tf.Tensor([10 10], shape=(2,), dtype=int32)
tf.Tensor(
[[10  7]
 [ 7 10]], shape=(2, 2), dtype=int32)
0
1
2
()
(2,)
(2, 2)

(3) Create Tensor with a certain dtype

matrix = tf.constant([[10., 7.],
                      [3., 2.],
                      [8., 9.]], dtype=tf.float16)

(4) Create Tensor ( tf.Variable() )

Difference between tf.Variable() & tf.constant()

  • tf.Variable() : mutable (can be changed)
  • tf.constant() : immutable (cannot be changed)


Changing an element after creating a tensor

tensor = tf.Variable([10, 7])
#tensor[0] = 7  #------------> ERROR 
tensor[0].assign(7)


Which one should you use? tf.constant() or tf.Variable()?

  • It depends on the problem
  • In most cases, though, TensorFlow will choose for you automatically (when loading data or when modeling)
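A minimal sketch of how the split usually plays out in practice (the names `X`, `w`, and the toy loss are illustrative, not from the course): constants hold fixed data, while Variables hold weights that training updates in place.

```python
import tensorflow as tf

X = tf.constant([[1.0], [2.0], [3.0]])  # input data (fixed) -> tf.constant
w = tf.Variable(0.5)                    # trainable weight   -> tf.Variable

with tf.GradientTape() as tape:
    y_pred = X * w
    loss = tf.reduce_mean(tf.square(y_pred - 2.0 * X))  # target: w = 2

grad = tape.gradient(loss, w)  # gradients flow only to the Variable
w.assign_sub(0.1 * grad)       # updates the Variable in place
print(w.numpy())
```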


(5) Create random tensors

seed1 = tf.random.Generator.from_seed(42) 
random_1 = seed1.normal(shape=(3, 2)) # N (0,1)

seed2 = tf.random.Generator.from_seed(42)
random_2 = seed2.normal(shape=(3, 2)) # N (0,1)

random_1 == random_2
<tf.Tensor: shape=(3, 2), dtype=bool, numpy=
array([[ True,  True],
       [ True,  True],
       [ True,  True]])>


(7) Shuffle Tensors

not_shuffled = tf.constant([[10, 7],
                            [3, 4],
                            [2, 5]])

tf.random.shuffle(not_shuffled) #------------ different every time
tf.random.shuffle(not_shuffled, seed=42)#---- same every time


The two seeding approaches below give different results!

  • tf.random.set_seed(42) : global seed
  • tf.random.shuffle(seed=42) : operation seed


# global random seed
tf.random.set_seed(42) 
tf.random.shuffle(not_shuffled)

# operation random seed
tf.random.set_seed(42)
tf.random.shuffle(not_shuffled, seed=42)
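To make a shuffle reproducible across runs, set both the global seed and the operation seed before each call; a small check (names `a`, `b`, `same` are illustrative):

```python
import tensorflow as tf

data = tf.constant([[10, 7],
                    [3, 4],
                    [2, 5]])

# Global seed + operation seed together pin down the shuffle order.
tf.random.set_seed(42)
a = tf.random.shuffle(data, seed=42)

tf.random.set_seed(42)
b = tf.random.shuffle(data, seed=42)

same = bool(tf.reduce_all(a == b))
print(same)  # True
```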


(8) zeros & ones

tf.ones((3, 2))
tf.zeros((3, 2))


(9) numpy to tensor

import numpy as np

data_np = np.arange(1, 25, dtype=np.int32) 
data_tensor = tf.constant(data_np,  shape=[2, 4, 3]) 
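One constraint worth noting: the requested shape must hold exactly as many elements as the source array (here 2 * 4 * 3 = 24). A sketch:

```python
import numpy as np
import tensorflow as tf

data_np = np.arange(1, 25, dtype=np.int32)  # 24 elements

# Works: 2 * 4 * 3 == 24, every element fits exactly.
data_tensor = tf.constant(data_np, shape=[2, 4, 3])

# Fails: 2 * 4 * 4 == 32 != 24.
# tf.constant(data_np, shape=[2, 4, 4])  # raises an error

print(data_tensor.shape)  # (2, 4, 3)
```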


(10) shape, rank, size

data = tf.zeros((2, 3, 4, 5))
#------------------------------------#
print(data.dtype)
print(data.ndim)
print(data.shape)
print(data.shape[0])
print(data.shape[-1])
print(tf.size(data).numpy())
<dtype: 'float32'>
4
(2, 3, 4, 5)
2
5
120


(11) indexing

data[:2, :2, :2, :2]
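A few more slicing patterns on the same 4-dimensional tensor (variable names are illustrative):

```python
import tensorflow as tf

data = tf.zeros((2, 3, 4, 5))

first_two = data[:2, :2, :2, :2]  # first 2 elements of each axis
last_axis = data[:, :, :, -1]     # last element along the final axis
one_item  = data[0, 0, 0, :]      # a single row along the last axis

print(first_two.shape)  # (2, 2, 2, 2)
print(last_axis.shape)  # (2, 3, 4)
print(one_item.shape)   # (5,)
```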


(12) Change dimension & Squeeze

data2_1 = data[..., tf.newaxis] 
data2_2 = tf.expand_dims(data, axis=-1) 
#-------------------------------------------------------#
print(data.shape)
print(data2_1.shape)
print(data2_2.shape)
(2, 3, 4, 5)
(2, 3, 4, 5, 1)
(2, 3, 4, 5, 1)


X = tf.constant(np.random.randint(0, 100, 50), shape=(1, 1, 1, 1, 50))
X.shape
X.ndim
TensorShape([1, 1, 1, 1, 50])
5


Squeeze : remove all dimensions of size 1

X_squeezed = tf.squeeze(X)
X_squeezed.shape
X_squeezed.ndim
TensorShape([50])
1


(13) Tensor Operations

tf.multiply(tensor, 10) == tensor * 10
tf.subtract(tensor, 10) == tensor - 10
tf.add(tensor, 10) == tensor + 10
tf.divide(tensor, 10) == tensor / 10
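A quick check that the function and operator forms agree, and that none of them modify the original tensor (note that dividing an integer tensor promotes the result to float):

```python
import tensorflow as tf

tensor = tf.constant([10, 7])

mul = tf.multiply(tensor, 10)  # same as tensor * 10
sub = tf.subtract(tensor, 10)  # same as tensor - 10
add = tf.add(tensor, 10)       # same as tensor + 10
div = tf.divide(tensor, 10)    # same as tensor / 10 (returns float)

print(mul.numpy(), sub.numpy(), add.numpy(), div.numpy())
print(tensor.numpy())  # [10  7] -- unchanged: each op returns a NEW tensor
```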


(14) Matrix Multiplication

data = tf.ones((3,5))
#----------------------------------#
data2 = data @ tf.transpose(data)
data3 = tf.matmul(data,tf.transpose(data))
#----------------------------------#
data2 == data3
<tf.Tensor: shape=(3, 3), dtype=bool, numpy=
array([[ True,  True,  True],
       [ True,  True,  True],
       [ True,  True,  True]])>


tf.matmul(a=X, b=Y, transpose_a=True, transpose_b=False)
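The transpose flags come in handy when inner dimensions don't line up; a sketch with two (3, 2) tensors (names `A`, `B` are illustrative):

```python
import tensorflow as tf

X = tf.ones((3, 2))
Y = tf.ones((3, 2))

# X @ Y fails: inner dimensions (2 and 3) don't match.
# Transposing one operand fixes the shapes:
A = tf.matmul(X, Y, transpose_a=True)  # (2, 3) @ (3, 2) -> (2, 2)
B = tf.matmul(X, Y, transpose_b=True)  # (3, 2) @ (2, 3) -> (3, 3)

print(A.shape, B.shape)  # (2, 2) (3, 3)
```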



(15) Dot Product

X = tf.constant([[1, 2], [3, 4], [5, 6]])
Y = tf.constant([[7, 8], [9, 10], [11, 12]])

print(tf.transpose(X))
print(Y)
tf.tensordot(tf.transpose(X), Y, axes=1)
tf.Tensor(
[[1 3 5]
 [2 4 6]], shape=(2, 3), dtype=int32)
tf.Tensor(
[[ 7  8]
 [ 9 10]
 [11 12]], shape=(3, 2), dtype=int32)
<tf.Tensor: shape=(2, 2), dtype=int32, numpy=
array([[ 89,  98],
       [116, 128]], dtype=int32)>


(16) Reshape

tf.reshape(Y,(2, 3))
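Worth remembering: reshape and transpose generally give different results even when the output shapes match. Reshape refills the new shape in row-major order; transpose swaps the axes. A sketch (reusing the same values as Y above):

```python
import tensorflow as tf

Y = tf.constant([[7, 8], [9, 10], [11, 12]])

reshaped = tf.reshape(Y, (2, 3))  # keeps element order, refills row by row
transposed = tf.transpose(Y)      # swaps axes, reorders elements

print(reshaped.numpy())
# [[ 7  8  9]
#  [10 11 12]]
print(transposed.numpy())
# [[ 7  9 11]
#  [ 8 10 12]]
```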


(17) Change Datatype

X1 = tf.cast(X, dtype=tf.float16)
Y1 = tf.cast(Y, dtype=tf.float32)


(18) abs, min, max, mean, sum

Absolute

tf.abs(X)


reduce_min(), reduce_max(), reduce_mean(), reduce_sum()

X = tf.constant(np.random.randint(low=0, high=100, size=50))
tf.reduce_min(X)
tf.reduce_max(X)
tf.reduce_mean(X)
tf.reduce_sum(X)
0
99
49
2482


(19) Positional Max & Min

X = tf.constant(np.random.random(50))
tf.argmax(X)
tf.argmin(X)
8
31
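A quick sanity check tying argmax/argmin back to reduce_max/reduce_min (variable names are illustrative):

```python
import numpy as np
import tensorflow as tf

X = tf.constant(np.random.random(50))

# The element at the argmax/argmin position IS the max/min value.
max_matches = bool(X[tf.argmax(X)] == tf.reduce_max(X))
min_matches = bool(X[tf.argmin(X)] == tf.reduce_min(X))
print(max_matches, min_matches)  # True True
```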


(20) One-hot encoding

some_list = [3, 1, 0, 2]
tf.one_hot(some_list, depth=4)
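The result has one row per index, with a 1. in that index's column; custom on/off values are also accepted:

```python
import tensorflow as tf

some_list = [3, 1, 0, 2]

encoded = tf.one_hot(some_list, depth=4)
print(encoded.numpy())
# [[0. 0. 0. 1.]
#  [0. 1. 0. 0.]
#  [1. 0. 0. 0.]
#  [0. 0. 1. 0.]]

# Custom on/off values (dtype follows the values given):
custom = tf.one_hot(some_list, depth=4, on_value="yes", off_value="no")
```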


(21) Square, Log, Square Root

H = tf.constant(np.arange(1, 10))
tf.square(H)
# tf.sqrt(H) ---- ERROR
H2 = tf.cast(H, dtype=tf.float32)
tf.sqrt(H2)
tf.math.log(H2)


(22) Manipulate tf.Variable()

  • assign & assign_add
  • both modify the Variable in place by default
I = tf.Variable(np.arange(0, 5)) #---- 0,1,2,3,4
I.assign([0, 1, 2, 3, 50]) #---------- 0,1,2,3,50
I.assign_add([10, 10, 10, 10, 10])#--- 10,11,12,13,60


(23) Tensors & Numpy

default type

  • tensor : float32
  • numpy : float64


  • numpy array → tensor
J = tf.constant(np.array([3., 7., 10.]))
J
<tf.Tensor: shape=(3,), dtype=float64, numpy=array([ 3.,  7., 10.])>


  • tensor → numpy array
np.array(J)
array([ 3.,  7., 10.])
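A small check of the default-dtype difference (names are illustrative):

```python
import numpy as np
import tensorflow as tf

# Same values, different default dtypes:
from_list = tf.constant([3., 7., 10.])            # float32 (TensorFlow default)
from_np   = tf.constant(np.array([3., 7., 10.]))  # float64 (inherited from NumPy)

print(from_list.dtype, from_np.dtype)
```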


(24) @tf.function

Role:

  • turns a Python function into a callable TensorFlow graph
  • when you export your code (to potentially run on another device), TensorFlow will attempt to convert it into a fast(er) version of itself (by making it part of a computation graph).
@tf.function
def tf_function(x, y):
  return x ** 2 + y
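Reusing the function above: it is called like a plain Python function, but TensorFlow executes it as a traced graph:

```python
import tensorflow as tf

@tf.function
def tf_function(x, y):
  return x ** 2 + y

# Called like a normal Python function, executed as a traced graph.
result = tf_function(tf.constant(3), tf.constant(4))
print(result)  # tf.Tensor(13, shape=(), dtype=int32)
```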


(25) GPUs

check available GPUs!

tf.config.list_physical_devices('GPU')


Information details of GPU

!nvidia-smi