
RE: [Fwd: status of the Richmond cluster and thanks to Steven James]



Jerry,

The answers are below.

-----Original Message-----
From: gilfoyle [mailto:ggilfoyl@richmond.edu]
Sent: Thursday, December 05, 2002 3:36 PM
To: Sasko Stafanovski
Subject: [Fwd: status of the Richmond cluster and thanks to Steven
James]

>   attached is the latest status report on the cluster which appears
>to be working! thanks for all your help in getting the cluster working.
>since that problem is fixed, there are some administrative type things
>i want to work in the next few weeks and need your help.

Not a problem.

>1. before the upgrade i had set up my account on pscm1 with the
>same uid as my account on gpg2. this enabled me to access my
>'pscm1' files from gpg2 transparently. since the upgrade this is
>no longer true and i cannot edit and delete files on pscm1 from
>gpg2. what is the best way to fix this? i could change my uid
>on gpg2 to match the one on pscm1. is there a more elegant way
>to do this? i want my files on pscm1 to be accessible on by me
>and no one else.

I do not have an account on gpg2. Is that your personal desktop?
Since you export from pscm1 and mount on your desktop, it is best to change
the uid on gpg2.
I don't know of any more elegant way to do this.
It's simple.
On gpg2:
   - change your uid in /etc/passwd to match the one on pscm1
   - chown -R <your_new_uid> <your_home_directory>
   - log out and log in again
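For reference, the steps above might look like this when run as root on gpg2. The username "gilfoyle", the path /home/gilfoyle, and uid 1001 are placeholders; substitute the actual uid from pscm1.

```shell
# Placeholder uid -- use the uid your account has on pscm1.
NEWUID=1001

# Change the uid in /etc/passwd; usermod also re-owns files
# in the home directory on most systems.
usermod -u "$NEWUID" gilfoyle

# Re-own everything under the home directory, just to be sure.
chown -R "$NEWUID" /home/gilfoyle
```

After that, log out and back in so the new uid takes effect.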

You can control access to your home directory on pscm1. If you set the
permissions so that only you have rwx, nobody else can read it (in theory).
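A quick way to lock the directory down, assuming your home on pscm1 is /home/gilfoyle (the path is a placeholder):

```shell
# Give the owner full access and remove all group/other access.
chmod 700 /home/gilfoyle

# Verify -- the listing should show drwx------ for the owner only.
ls -ld /home/gilfoyle
```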

>2. when i am on mfv1 and i try to ssh to gpg2 i get the following
>message.

>physxcd:gilfoyle> ssh gpg2
>ssh_exchange_identification: Connection closed by remote host

>i tried various fixes and none work. do you have any ideas?

My best guess is a needed upgrade of openssh.
Which version are you running? I can't check without access...
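To get the version numbers on both ends, something like this should work (the package query assumes an RPM-based system, which may not match your setup):

```shell
# Client side: ssh prints its version to stderr.
ssh -V

# Server side (on gpg2): query the installed server package.
rpm -q openssh-server
```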

>3. node 8 in the cluster is still dead. can you try to resurrect
>it? if you can't we should send it back to linuxlabs to get fixed
>or replaced.

I tried what James said, but without success. I'll send him an
e-mail.

>4. node 48 in the cluster is also dead. can you try to resurrect
>it? if you can't we should send it back with node 8.

What is node 48? Isn't it the new secondary master?

>5. we got money from the university faculty research committee to
>purchase some new nodes to add to the cluster. this would involve
>removing some of the nodes from the old cluster (psc1) so we can 
>use the rack for the new ones. we should talk in the next week
>or two to plan this move.

I am available.