This is a question from ChatGPT and Sebastian Raschka's book on building a large language model from scratch. First, these configs are interesting.

The 124-million-parameter GPT model config:

GPT_CONFIG_124M = {
    "vocab_size": 50257,     # Vocabulary size
    "context_length": 1024,  # Context length
    "emb_dim": 768,          # Embedding dimension
    "n_heads": 12,           # Number of attention heads
    "n_layers": 12,          # Number of layers
    "drop_rate": 0.1,        # Dropout rate
    "qkv_bias": False        # Query-Key-Value bias
}

The 1.5-billion-parameter GPT model config:

GPT_CONFIG_1558M = {
    "vocab_size": 50257,     # Vocabulary size
    "context_length": 1024,  # Context length
    "emb_dim": 1600,         # ...
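Out of curiosity, here is a quick sanity check on the 124M figure. This is a minimal sketch, assuming the standard GPT-2 layout implied by the config (learned positional embeddings, a 4x feed-forward expansion, two LayerNorms per block plus a final LayerNorm, and weight tying between the token embedding and the output head); the count_params helper is my own illustration, not from the book.

def count_params(cfg):
    d = cfg["emb_dim"]
    tok_emb = cfg["vocab_size"] * d          # token embedding matrix
    pos_emb = cfg["context_length"] * d      # learned positional embeddings

    qkv = 3 * d * d + (3 * d if cfg["qkv_bias"] else 0)  # W_q, W_k, W_v projections
    attn = qkv + d * d + d                   # plus output projection and its bias
    ffn = d * 4 * d + 4 * d                  # feed-forward expansion layer + bias
    ffn += 4 * d * d + d                     # feed-forward contraction layer + bias
    norms = 2 * 2 * d                        # two LayerNorms (scale + shift) per block
    block = attn + ffn + norms

    final_norm = 2 * d
    # With weight tying, the output head reuses tok_emb, so it adds no new parameters.
    return tok_emb + pos_emb + cfg["n_layers"] * block + final_norm

print(f"{count_params(GPT_CONFIG_124M):,}")  # 124,412,160, i.e. roughly 124M

With those assumptions the count lands almost exactly on the advertised 124 million; without weight tying the separate output head would add another vocab_size * emb_dim parameters.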
So I was testing the setup of a Hadoop cluster and running MapReduce. There are preliminary notes in the first post. One key point: I ran all of the commands below over SSH, logged in as that user.

Set up the user on the main machine:

sudo usermod --shell /bin/bash mainhdfs

Install and start OpenSSH:

su - mainhdfs
sudo apt-get install openssh-server
sudo systemctl enable ssh --now    # enables and starts sshd in one step

Unpack Hadoop under /usr/local and hand it to that user:

su - mainhdfs
sudo mkdir /usr/local/hadoop
sudo mv hadoop-3.3.6 /usr/local/hadoop/
sudo chown -R mainhdfs:hadoop /usr/local/hadoop

Set up passwordless SSH for that user:

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
chmod 700 ~/.ssh

Add the following to that user's .bashrc and .bash_profile:

export HADOOP_HOME=/usr/local/hadoop/hadoop-3.3.6
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_CONF_...
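From there, the usual next steps are to fill in core-site.xml and hdfs-site.xml, point JAVA_HOME in hadoop-env.sh, format the namenode, start the daemons, and run the bundled wordcount example to confirm MapReduce works. A rough sketch of those steps, assuming a stock Hadoop 3.3.6 tarball under $HADOOP_HOME and a single-node (pseudo-distributed) setup; the HDFS paths under /user/mainhdfs are just placeholders:

# One-time: format the HDFS namenode
hdfs namenode -format

# Start the HDFS and YARN daemons
start-dfs.sh
start-yarn.sh

# Copy some input into HDFS and run the bundled wordcount example
hdfs dfs -mkdir -p /user/mainhdfs/input
hdfs dfs -put $HADOOP_HOME/etc/hadoop/*.xml /user/mainhdfs/input
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.6.jar \
    wordcount /user/mainhdfs/input /user/mainhdfs/output

# Inspect the result
hdfs dfs -cat /user/mainhdfs/output/part-r-00000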