Server Rack for Members

The server rack supports 45 1U spaces.

Room Power

The outlets present in the Fishbowl are:
* NEMA 5-15R outlets (rated for 15A) connected to 20A breakers.

TODO:
* Check which outlets are on which breakers
* Discuss running proper power to the Fishbowl, since supporting other power-heavy events there (e.g. a LAN party) would be nice
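For rough planning, the usable capacity of one circuit can be sketched from the numbers above. This is a back-of-the-envelope sketch assuming standard 120V circuits; the 0.8 factor is the common continuous-load derating rule, not a measured value.

```python
# Rough usable capacity of one Fishbowl circuit. Assumes standard 120V
# circuits; the 0.8 factor is the usual continuous-load derating rule.
VOLTS = 120
BREAKER_A = 20       # breaker rating, per the list above
RECEPTACLE_A = 15    # NEMA 5-15R receptacle rating

breaker_continuous_w = VOLTS * BREAKER_A * 0.8   # continuous load per breaker
receptacle_w = VOLTS * RECEPTACLE_A              # max through one receptacle

print(f"{breaker_continuous_w:.0f}W continuous per 20A breaker")   # 1920W
print(f"{receptacle_w:.0f}W max through one 5-15R receptacle")     # 1800W
```

Note that the 15A receptacle, not the 20A breaker, is the limiting factor at any single outlet.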

Current Rack Setup

(This setup documentation is temporary. Moving to a physical labeling / self-documenting approach would be preferred.)

| Maintainer | Information | Power Requirements | Power Notes |
|---|---|---|---|
| Denhac | 3U – Shelf | | |
| Denhac – Networking | 1U – UniFi Switch 24 (24-port) [Docs] [Datasheet] | 0.3A | 36W (25W “max power consumption”) |
| Denhac – Power | 1U – APC AP9211 MasterSwitch Power Distribution Unit (8 outlets) [Docs] | | MasterSwitch models AP9211 and AP9217 have NEMA 5-15 outlets; max total current draw per phase: 12A; load capacity: 1440 VA |
| Eddie | 1U – Dell PowerEdge R610 [Docs] | 4.2A (PSU1), 4.2A (PSU2) | 2x PSU (redundant); stock PSU is 502W, high-output PSU is 717W; installed: 502W |
| Eddie | 1U – Dell PowerEdge R610 [Docs] | 4.2A (PSU1), 4.2A (PSU2) | 2x PSU (redundant); stock PSU is 502W, high-output PSU is 717W; installed: 502W |
| Jax | 2U – Dell PowerEdge R810 [Docs] | 9.2A (PSU1), 9.2A (PSU2) | 2x PSU (redundant); stock PSU is 1100W; max inrush current 55A per PSU (10 ms duration) |
| Jax | 2U – Dell PowerEdge R710 [Docs] | 4.8A (PSU1), 4.8A (PSU2) | 2x PSU (redundant); stock PSU is 570W; installed: 570W |
| (?) | 1U – Sun Fire X2270 [Manual] | | |
| Tilver | 1U – Dell PowerEdge 1950 [Docs] | 5.6A (PSU1) | 1x PSU (Dell D670P-S0); stock PSU is 670W (?) (could not find docs); installed: (?) |
| Agent | 2U – Dell PowerEdge R720xd [Docs] | 9.2A (PSU1), 9.2A (PSU2) | 2x PSU (redundant); stock PSU is 750W; installed: 1100W |
| Chris J | 2U – Dell PowerEdge R710 [Docs] | 7.3A | 1x PSU; stock PSU is 570W; installed: 870W |
| Denhac – Console | 1U – Dell PowerEdge 2161DS KVM over IP [Docs] | 1.0A | |
| Denhac – Console | 1U – Dell PowerEdge Rack Console 15FP [Docs] | | |
| Piano | 1U – Dell PowerEdge R620 [Docs] | 6.3A (PSU1), 6.3A (PSU2) | 2x PSU (redundant); installed: 750W |
| Cody (?) | HP NUC thing (todo?) | 0.6A | 65W power brick |
| Cody | 3U – EqualLogic PS Series 3000 (16 drive bays) [Manual] | 7.0A (PSU1), 7.0A (PSU2) | 2x PSU – 840W |
| Cody | HP BladeSystem c7000 enclosure (BL460c Gen8, BL460c G6) | 2.2A each, blades 1–5 | 9 of 10 rear fan modules powered on; 2 of 8 gigabit network cards installed, 4 sitting to the side; 2x management cards; no definitive figures, estimating ~260W (2.2A) per blade; PDU appears to max out at ~12A per port |
| Denhac – Power | 1U – APC AP9211 MasterSwitch Power Distribution Unit (8 outlets, missing brain) [Docs] | | |
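Summing the per-PSU currents listed above gives a feel for how many circuits the rack would need in the worst case. This is a rough sketch: the figures are nameplate maxima (one feed side per device), not measured draw, so real load is far lower.

```python
# Worst-case current budget: one PSU feed per device, nameplate amps
# copied from the setup list above (blades estimated at 2.2A each).
NAMEPLATE_AMPS = {
    "UniFi Switch 24": 0.3,
    "R610 (Eddie) #1": 4.2,
    "R610 (Eddie) #2": 4.2,
    "R810 (Jax)": 9.2,
    "R710 (Jax)": 4.8,
    "PE 1950 (Tilver)": 5.6,
    "R720xd (Agent)": 9.2,
    "R710 (Chris J)": 7.3,
    "2161DS KVM": 1.0,
    "R620 (Piano)": 6.3,
    "HP NUC (Cody)": 0.6,
    "EqualLogic PS 3000": 7.0,
    "c7000 blades x5": 5 * 2.2,
}

total_amps = sum(NAMEPLATE_AMPS.values())
circuits = total_amps / (20 * 0.8)  # 20A breakers, 80% continuous derating

print(f"Total nameplate draw: {total_amps:.1f}A")            # 70.7A
print(f"20A circuits needed at worst case: {circuits:.1f}")  # 4.4
```

Even allowing for nameplate figures being pessimistic, this is well beyond a single 20A circuit, which supports the TODO above about running proper power to the Fishbowl.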
Server Rack as of 2022-07-02

Proposed Member Server Rack Policy

Current Proposal: Proposed Member Server Rack Policy – Google Docs

Original Proposal: [READ ONLY, DO NOT USE] denhac member server rack policy – Google Docs

Open Questions / Feedback

“Members may rack a 2U server. Anything more will require approval and demonstrate added value to the space for other members.”

Would it make sense to document any exceptions on the wiki, covering both the value added and any special considerations for an abnormal rack setup? No worries if that sounds like too much red tape to add to the MSRP.

@sunzenshen in Slack

“Ports on the switch will be disabled by default. Post in #infrastructure for assistance.”

I would like to see the Terraform config be the source of truth for what’s in the server rack. I think it’s reasonable to require all the setup steps for a server (labeling, racking, etc.) to be documented along with the network policy change.

@isochronlabs
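One way to read the comment above: keep a single machine-readable inventory and derive everything (port enablement, labels, this wiki page) from it. A minimal sketch of what such a source of truth and its checks might look like; the field names and entries here are invented for illustration, and a real version might live alongside the Terraform config.

```python
# Hypothetical rack inventory as a single source of truth. All fields and
# entries are made up for illustration.
RACK_INVENTORY = {
    "piano": {"owner": "Piano", "u_slots": [20], "switch_port": 7, "labeled": True},
    "agent": {"owner": "Agent", "u_slots": [21, 22], "switch_port": None, "labeled": True},
}

def blocked_servers(inventory: dict) -> list[str]:
    """Servers whose switch port should stay disabled: no port assignment
    in the config yet, or no physical label on the machine."""
    return sorted(name for name, server in inventory.items()
                  if server["switch_port"] is None or not server["labeled"])

print(blocked_servers(RACK_INVENTORY))  # ['agent']
```

A check like this could run in CI on the inventory file, so “ports disabled by default” becomes enforceable rather than a convention.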

“Must provide own equipment i.e. rails, shelf, power & network cables, etc.”

I think this should become a checklist that’s part of the setup/onboarding process.

@isochronlabs

“If fans are too loud you will be asked to turn them down.”

I’ve already noticed a fan that produces a high-pitched frequency that isn’t “loud” but is perceived as loud due to the Fletcher-Munson curve. This might get resolved once we close up the server rack, but it’s worth calling out: a loud fan producing white noise sounds better than a not-as-loud fan producing a distinct frequency. … It could also be a me problem due to my background in sound design/music production, so I won’t be offended if we brush this off.

@isochronlabs

Would it make sense to allocate IP addresses based on their position in the rack? Might make troubleshooting easier in the future.

@isochronlabs
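If position-based addressing were adopted, the mapping could be as simple as "host octet = U slot", so the IP immediately tells you where to look in the rack. A sketch of the idea, using a made-up subnet (the real VLAN/subnet would come from #infrastructure):

```python
import ipaddress

# Made-up member subnet, purely for illustration.
RACK_SUBNET = ipaddress.ip_network("10.20.30.0/24")

def ip_for_slot(u_slot: int) -> ipaddress.IPv4Address:
    """Map a 1-based rack U position to a host address.

    The rack supports 45 1U spaces, so valid slots are 1-45.
    """
    if not 1 <= u_slot <= 45:
        raise ValueError(f"U slot {u_slot} out of range 1-45")
    return RACK_SUBNET.network_address + u_slot

print(ip_for_slot(12))  # 10.20.30.12
```

The main cost of this scheme is that moving a server to a different slot implies renumbering it, which may or may not be acceptable.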

“The server rack is not a parking lot – if you are not utilizing your server please free the space for other members.”

I believe this should have defined criteria, to avoid situations like the one the server rack was in at its previous location. It is also difficult to know whether a server is actually in use, short of it being turned off/unplugged.

Would we have to measure utilization via network traffic/connections or power consumption? It could be handled with a recurring poll every (1-3 months?) asking members to confirm their server is still in use. I can imagine a member saying yes just to avoid un-racking it, so it might be better to make the poll ad hoc: when a member asks for space and there isn’t enough room, it gets negotiated with the members already in the rack. I’d personally lean toward the occasional missed “parking lot violation” over unnecessary process overhead when there isn’t much racking activity.

In the scenario that a member leaves their stuff, does it fall under the member-storage abandonment policy? And after that, where do we put the servers, and what happens next?

@isochronlabs

“Your machine may be subject to scrutiny for legal and security purposes – up to and including being powered off without notice.”

Who would be handling legal/security issues when they come up? Would this be something where the board would have to approach the group managing the rack, or would it be up to a consensus of group members to decide when it’s appropriate?

@isochronlabs

When rack maintenance needs to happen, such as when the Fishbowl’s power distribution problems start getting resolved, how should we approach it? A few options that come to mind, ordered by …freedom:

* Ask members to be present (in person or online?) for planned changes to network infrastructure, power, rack positioning, etc.
  * If a member isn’t present, detail how to proceed <todo>
* Schedule a regular window of time every month for changes that require servers to go offline
  * Consistently scheduled to make it predictable, but still requiring communication about changes beforehand
* Schedule an ad-hoc window of time with (?) days of notice, communicated to members
* Set the expectation that servers can be gracefully shut off (power button) at any time and powered back on immediately after the changes are complete
  * IMO, I like this one because it allows changes whenever volunteers are available. Exceptions to this policy could be granted.

After writing these up, I think there needs to be a mailing list for all members with equipment in the rack including people helping to maintain the rack.

@isochronlabs