...need help recovering RAID5

Hardware & Devices
Tags: algorithms, data-structures, json, help, question
Lost User wrote:
UPDATE: I did check the config, and I am not sure I am looking at the correct file. Please note that it is auto-generated and does NOT contain the expected RAID information.

mdadm.conf:

    #
    # !NB! Run update-initramfs -u after updating this file.
    # !NB! This will ensure that initramfs has an uptodate copy.
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #
    # by default (built-in), scan all partitions (/proc/partitions) and all
    # containers for MD superblocks. alternatively, specify devices to scan, using
    # wildcards if desired.
    #DEVICE partitions containers

    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>

    # instruct the monitoring daemon where to send mail alerts
    MAILADDR root

    # definitions of existing MD arrays
    ARRAY /dev/md/0 metadata=1.2 UUID=193016b6:797aeb14:73cda28e:23dc94cf name=q5-desktop:0

    # This configuration was auto-generated on Fri, 25 Nov 2022 18:55:51 -0600 by mkconf

Would somebody here be interested in helping me recover an inaccessible RAID5 (software RAID)?

Here is a partial copy of ONE of my attempts to recover what appears to be the result of an unfortunate power failure while the RAID5 was being used / updated. The failed RAID's mdstat entry is highlighted; the rest is just FYI.

    nov25-1@nov251-desktop:~$ sudo cat /proc/mdstat
    [sudo] password for nov25-1:
    Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
    md121 : inactive sde32[1](S)
          20462592 blocks super 1.2

    md122 : inactive sde24[0](S) sde25[1](S)
          204666880 blocks super 1.2

    md123 : active raid5 sde35[4](S) sde29[3] sde28[1] sde6[0]
          10229760 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

    md124 : inactive sde34[1](S) sde33[0](S) sda19[3](S)
          217469952 blocks super 1.2

    md125 : inactive sdf9[3](S) sde27[1](S) sda4[4](S)
          511603712 blocks super 1.2

    md0 : inactive sda17[1](S)
          204667904 blocks super 1.2

    md126 : active (auto-read-only) raid6 sde18[4] sde26[3] sda13[0]
          189564928 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/3] [U_UU]

    md127 : active raid5 sdc3[0] sde12[4] sda6[1] sdb2[3]
          307000320 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

    unused devices: <none>

    nov25-1@nov251-desktop:~$ sudo mdadm --stop /dev/md124
    [sudo] password for nov25-1:
    mdadm: stopped /dev/md124
    nov25-1@nov251-desktop:~$ sudo mdadm -A --force /dev/md
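For reference, the usual mdadm sequence for this situation, and roughly what I have been trying, is sketched below. The names /dev/mdNNN and /dev/sdX1 ... /dev/sdX3 are placeholders only, not my actual devices; substitute the real array and member partitions.

    # List the arrays mdadm currently sees; output lines are in
    # mdadm.conf ARRAY format, so they can be compared with the
    # auto-generated file above
    sudo mdadm --detail --scan

    # Examine the RAID superblock on each suspected member partition
    # (placeholder names -- use the real member partitions)
    sudo mdadm --examine /dev/sdX1 /dev/sdX2 /dev/sdX3

    # Stop the inactive / partially assembled array
    sudo mdadm --stop /dev/mdNNN

    # Attempt a forced assembly from the explicit member partitions
    sudo mdadm --assemble --force /dev/mdNNN /dev/sdX1 /dev/sdX2 /dev/sdX3

    # Check the result
    cat /proc/mdstat

mdadm --examine reports the array UUID and event count stored on each member, which is what --assemble --force uses when deciding whether an out-of-date member can still be re-included.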
