
leibold (reply to Maxo)

Re: Data recovery help

Please don't take this personally, but you have already made several big mistakes:

1.) When you receive filesystem-related errors, the first priority is to stop writing to that filesystem. Some data still in the buffer pool may yet be readable, so before shutting down, attempt to save truly critical files (obviously not onto the damaged filesystem or any other filesystem on the same physical media; if possible, use a network connection to copy those files to another system altogether).

2.) After shutting down a damaged disk (or in this case SD card), prevent any writing to it until all data that can be saved has been saved. Beware of unintentional writes that some journalled filesystems perform even when mounted read-only! Always use read-only mode when mounting media from which you want to recover data (for ext4, mounting with -o ro,noload also skips journal replay entirely). At the very least you are putting the data at unnecessary risk; far more likely you are adding to the already existing data corruption. It is safer to make a copy of the raw disk partition than to try to mount the damaged filesystem.

3.) I know some books say differently, but I say never run fsck with the -y option unless you have a good copy of the damaged filesystem (a dd image of the raw partition) and you have determined the cause of the problem and either fixed it or determined that it no longer prevents a repair of the filesystem.

4.) Never ever back up anything over the last known good backup. As you found out, if the backup fails, your previously good backup is gone too. That rule applies even if you haven't had any filesystem errors before starting the backup.

From the error messages it is clear that your media is damaged and at least block 2359734 can no longer be written to. My first action would be to get an SD card of the same capacity and attempt a raw copy from the defective card to the new card (if you are really lucky, all blocks are still readable even though some can no longer be written). If there are unreadable areas on the defective SD card you may have to restart dd with the seek and skip options (or experiment with conv=noerror).
Once you have a copy of your filesystem on good media you can attempt to repair the filesystem. Replaying the ext4 journal may actually repair some of the damage once it can write data from the journal to the proper filesystem blocks. Fsck should do the rest to restore integrity to the filesystem, but it will do (almost) nothing to recover lost data (it does recover orphaned files and directories). It cannot fix data that was never written properly to begin with, or that was somehow overwritten.
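The raw-copy idea above can be sketched as follows. This demo runs against plain files instead of the real devices (no root needed); the skip/seek values are placeholders, not measured numbers:

```shell
# Sketch of a raw copy with dd, demonstrated on plain files instead of
# the real card devices discussed above. conv=noerror keeps dd going
# past read errors, and sync pads each failed read with zeros so input
# and output offsets stay aligned.
dd if=/dev/zero of=source.img bs=512 count=2048 2>/dev/null   # fake 1 MiB card
dd if=source.img of=copy.img bs=512 conv=noerror,sync 2>/dev/null
# Resuming past a bad area: skip N input blocks and seek the same N
# output blocks (conv=notrunc keeps the already-copied part of the output):
dd if=source.img of=copy.img bs=512 skip=1024 seek=1024 conv=notrunc 2>/dev/null
cmp source.img copy.img && echo "copies match"
```

On real media you would replace source.img and copy.img with the device nodes of the defective and the new card.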

Good luck.
--
Got some spare cpu cycles ? Join Team Helix or Team Starfire!



Maxo

said by leibold:

Please don't take this personally, but you have already made several big mistakes:

No offense taken. You are absolutely correct on all points. I did not take the initial errors I got seriously enough. Also, my backup logic was not solid, which is what turned this inconvenience into a disaster.
I have already tried making an image of the disk:
➜  ~  sudo dd if=/dev/mmcblk0 of=/home/david/sdcard.img                                                                                                                                   
dd: reading `/dev/mmcblk0': Input/output error
4587520+0 records in
4587520+0 records out
2348810240 bytes (2.3 GB) copied, 264.791 s, 8.9 MB/s
 
Do you think that setting an identical SD card as the of= target would be beneficial? I used the only identical card I had to get the server back up. I was able to get all the code back, all the data from last October, and some up-to-date data that gets synced to the registers: the membership data and inventory.
--
"Padre, nobody said war was fun now bowl!" - Sherman T Potter

»maxolasersquad.com/

»maxolasersquad.blogspot.com

»www.facebook.com/maxolasersquad


Maxo

I'm dding it with noerror right now to see what happens.


pablo

Hi,

See lugnut's comment in this thread:

»system wont boot

It may be the ticket to recovering some of your data.

Cheers,
-pablo
--
openSUSE 12.2/KDE 4.x
ISP: TekSavvy Bonded DSL; backhauled via a 6KM wireless link
Assorted goodies: »pablo.blog.blueoakdb.com



leibold (reply to Maxo)

said by Maxo:

Do you think that setting an identical SD card as the of= target would be beneficial?

Yes, there is a chance you might get back some of the lost data.

Your dd was done without specifying a blocksize, which means you used the default of 512 bytes. Larger blocksizes would copy faster, but in an error-recovery scenario you want the blocksize to be small, so this is good (you could have used 1kB, or whatever blocksize your filesystem is using). In order to see whether the block where dd stopped is the same block as in the earlier filesystem error messages you need to do some calculations (don't forget that you are doing the dd on the entire block device while the damaged filesystem is on the 2nd partition of that block device). Assuming your ext4 filesystem uses a 1kB blocksize and the first partition on your SD card is about 64MB, then the dd stopped at the same place as the filesystem error you posted earlier. With luck this is the only bad spot on the media.
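The calculation described above can be sketched like this (the ~64MB partition offset and the 1kB block size are the stated assumptions, not measured values):

```shell
# Translate an ext4 block number from a kernel error message into a byte
# offset on the whole SD card device. Assumed values, per the post above:
# ~64 MiB in front of the 2nd partition, 1 KiB filesystem blocks.
PART_OFFSET=$((64 * 1024 * 1024))   # bytes before the 2nd partition
FS_BLOCK=2359734                    # block number from the earlier error
FS_BLOCK_SIZE=1024
BYTE_OFFSET=$((PART_OFFSET + FS_BLOCK * FS_BLOCK_SIZE))
DD_RECORD=$((BYTE_OFFSET / 512))    # which 512-byte dd record that is
echo "device byte offset: $BYTE_OFFSET = dd record $DD_RECORD"
```

Swap in the real partition offset and block size from your own fdisk and dumpe2fs output before drawing conclusions from the numbers.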

Carefully check the messages produced by dd and the resulting output file. If the dd you are using substitutes zero-filled blocks for input blocks it can't read, that is fine. However, if the output is too short because dd didn't write anything when it encountered a bad input block, the copy is not usable for filesystem recovery. I seem to remember that not all implementations of dd behave identically in the presence of input errors, which is why I prefer to use skip & seek (or iseek & oseek for dd versions that have those options) to eliminate the guesswork.


Maxo

dd is still running this morning. It looks like this:

2348810240 bytes (2.3 GB) copied, 54020.2 s, 43.5 kB/s
dd: reading `/dev/mmcblk0': Input/output error
4587520+0 records in
4587520+0 records out
2348810240 bytes (2.3 GB) copied, 54021 s, 43.5 kB/s
dd: reading `/dev/mmcblk0': Input/output error
4587520+0 records in
4587520+0 records out
2348810240 bytes (2.3 GB) copied, 54021.8 s, 43.5 kB/s
dd: reading `/dev/mmcblk0': Input/output error
4587520+0 records in
4587520+0 records out
2348810240 bytes (2.3 GB) copied, 54022.5 s, 43.5 kB/s
dd: reading `/dev/mmcblk0': Input/output error
4587520+0 records in
4587520+0 records out
2348810240 bytes (2.3 GB) copied, 54023.3 s, 43.5 kB/s



leibold

How big is the SD card ?
You can check where dd currently is by sending it a SIGUSR1 signal: kill -USR1 PID (where PID is the process id of the running dd command).
If it is stuck at that block and doesn't continue beyond it, you will have to stop dd and determine by trial and error which is the next readable block (using the skip or iseek option of dd). If you have dd_rescue on your system, use that instead, since it will do all that work for you.
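A small demonstration of the SIGUSR1 mechanism with GNU dd; the copy here is a dummy /dev/zero to /dev/null transfer, so it is safe to run anywhere:

```shell
# GNU dd prints its progress counters to its *own* stderr when it
# receives USR1, i.e. in the terminal (or file) where dd itself runs,
# not in the shell from which the kill was typed.
dd if=/dev/zero of=/dev/null bs=512 2>dd_progress.log &
DD_PID=$!
sleep 1
kill -USR1 "$DD_PID"     # ask dd for a progress report
sleep 1                  # give dd a moment to write it
kill "$DD_PID"           # stop the demo copy
wait "$DD_PID" 2>/dev/null || true
cat dd_progress.log      # the "records in/out" report lands here
```

This is also why a kill -USR1 from a second terminal can look like it does nothing: the report appears where dd is running.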



Maxo

The card is 4GB. It is sitting at 2.3GB copied.
I now have ddrescue installed. kill -USR1 is not giving me anything:

~ sudo kill -USR1 3091
~

Do you think killing it and restarting with ddrescue is best? If so, what are the best options to use?
Right now I'm just putting the output to a .img file in my home directory. I'm going to order a few more SD cards this weekend from Newegg.


leibold

said by Maxo:

kill -USR1 is not giving me anything

~ sudo kill -USR1 3091
~

The output appears in the terminal where dd is running, not in the terminal from which you sent the USR1 signal.
said by Maxo:

Do you think killing it and restarting with ddrescue is best?

Given that dd is stuck, definitely use dd_rescue.

The option syntax is different from dd's, so check dd_rescue -h for the list. Most defaults are fine, but I would recommend saving the list of bad blocks:

dd_rescue -o bad_block_list /dev/mmcblk0 /home/david/sdcard.img


Maxo

Now I'm getting somewhere ... I think.

sudo dd_rescue -o bad_block_list /dev/mmcblk0 /home/david/sdcard.img
dd_rescue: (info) expect to copy 3872256kB from /dev/mmcblk0
dd_rescue: (info): ipos:   2293760.0k, opos:   2293760.0k, xferd:   2293760.0k
                *  errs:      0, errxfer:         0.0k, succxfer:   2293760.0k
             +curr.rate:        0kB/s, avg.rate:     8634kB/s, avg.load:  2.0%
             >------------------------.................<  59%  ETA:  0:03:02 
dd_rescue: (warning): read /dev/mmcblk0 (2293760.0k): Success!
 
dd_rescue: (info): ipos:   2293760.5k, opos:   2293760.5k, xferd:   2293760.5k
                *  errs:      1, errxfer:         0.5k, succxfer:   2293760.0k
             +curr.rate:        1kB/s, avg.rate:     8609kB/s, avg.load:  2.0%
             >-----------------------x.................<  59%  ETA:  0:03:03 
dd_rescue: (warning): read /dev/mmcblk0 (2293760.5k): Success!
 
dd_rescue: (info): ipos:   2293761.0k, opos:   2293761.0k, xferd:   2293761.0k
                *  errs:      2, errxfer:         1.0k, succxfer:   2293760.0k
             +curr.rate:        1kB/s, avg.rate:     8584kB/s, avg.load:  2.0%
             >-----------------------x.................<  59%  ETA:  0:03:03 
dd_rescue: (warning): read /dev/mmcblk0 (2293761.0k): Success!
 
dd_rescue: (info): ipos:   2293761.5k, opos:   2293761.5k, xferd:   2293761.5k
                *  errs:      3, errxfer:         1.5k, succxfer:   2293760.0k
             +curr.rate:        1kB/s, avg.rate:     8559kB/s, avg.load:  2.0%
             >-----------------------x.................<  59%  ETA:  0:03:04 
dd_rescue: (warning): read /dev/mmcblk0 (2293761.5k): Success!
 
dd_rescue: (info): ipos:   2293762.0k, opos:   2293762.0k, xferd:   2293762.0k
                *  errs:      4, errxfer:         2.0k, succxfer:   2293760.0k
             +curr.rate:        1kB/s, avg.rate:     8535kB/s, avg.load:  2.0%
             >-----------------------x.................<  59%  ETA:  0:03:04 
dd_rescue: (warning): read /dev/mmcblk0 (2293762.0k): Success!
 
dd_rescue: (info): ipos:   2293762.5k, opos:   2293762.5k, xferd:   2293762.5k
                *  errs:      5, errxfer:         2.5k, succxfer:   2293760.0k
             +curr.rate:        1kB/s, avg.rate:     8511kB/s, avg.load:  2.0%
             >-----------------------x.................<  59%  ETA:  0:03:05 
dd_rescue: (warning): read /dev/mmcblk0 (2293762.5k): Success!
 
 



leibold

5 consecutive bad sectors. Let's hope that this is all there is. Remember that the data for at least one of those blocks (2 sectors) is present in the ext4 filesystem journal. Once the SD card data is copied to working media, mounting the filesystem will hopefully recover that block through journal replay.

Most of your data ought to be intact (2.5kB out of 4GB is next to nothing, though of course not every block is equally important).



leibold (reply to Maxo)

You don't have to wait until you get a new 4GB SD card to recover the data.

Once dd_rescue is finished (I hope it long since has), and if you have the space for it, make a 2nd copy of the image for your recovery attempts. Turn the 2nd image into a block device using a loopback device (see losetup) and mount the 2nd partition (ext4). If you don't know the starting offset of the 2nd partition, create a loop device for the entire SD card image and use fdisk to read the partition table (be careful not to mix up 512-byte sectors and 1kB blocks).
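The loopback procedure can be sketched as follows; the start sector 147456 is only an example value, the real one comes from fdisk's output:

```shell
# Compute the byte offset of the 2nd partition for losetup -o from the
# start sector fdisk reports. 147456 is an example start sector.
START_SECTOR=147456
OFFSET=$((START_SECTOR * 512))    # fdisk reports 512-byte sectors
echo "losetup offset: $OFFSET"
# With root privileges (not run here):
#   losetup /dev/loop0 sdcard.img              # whole image as a device
#   fdisk -l /dev/loop0                        # note partition 2's start sector
#   losetup /dev/loop1 sdcard.img -o $OFFSET   # 2nd partition only
#   mount -r /dev/loop1 /mnt                   # mount it read-only
```

Working on a 2nd copy of the image, as suggested above, means a failed repair attempt costs nothing but another copy.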



Maxo

Here are the final details of dd_rescue:

dd_rescue: (info): read /dev/mmcblk0 (3872256.0k): EOF
dd_rescue: (info): Summary for /dev/mmcblk0 -> /home/david/sdcard.img:
dd_rescue: (info): ipos:   3872256.0k, opos:   3872256.0k, xferd:   3872256.0k
                   errs: 286720, errxfer:    143360.0k, succxfer:   3728896.0k
             +curr.rate:      303kB/s, avg.rate:       17kB/s, avg.load:  0.1%
             >-----------------------xxx--------------.<  99%  ETA:  0:00:00
 
It ran through the weekend while I was out of town.



Maxo

One thing I didn't consider is that I do not know how to mount a single partition from a .img file of a whole disk.



Maxo

After some Googling, I did this:

root@HP:/home/baucumd# sudo losetup /dev/loop0 sdcard.img -o $((75497472))
root@HP:/home/baucumd# mkdir /media/sdcard
root@HP:/home/baucumd# fsck -fv /dev/loop0
fsck from util-linux 2.20.1
e2fsck 1.42 (29-Nov-2011)
Pass 1: Checking inodes, blocks, and sizes
Inode 90322 has an invalid extent node (blk 2327776, lblk 3855)
Clear<y>? yes
 
Inode 90322, i_blocks is 11186, should be 7818.  Fix<y>? yes
 
HTREE directory inode 278684 has an invalid root node.
Clear HTree index<y>? yes
 
HTREE directory inode 278787 has an invalid root node.
Clear HTree index<y>? yes
...
...
/dev/loop0: ***** FILE SYSTEM WAS MODIFIED *****
 
   74314 inodes used (15.64%)
     598 non-contiguous files (0.8%)
     346 non-contiguous directories (0.5%)
         # of inodes with ind/dind/tind blocks: 0/0/0
         Extent depth histogram: 67135/184/1
 2306852 blocks used (60.73%)
       0 bad blocks
       0 large files
 
   58220 regular files
    7957 directories
      56 character device files
      25 block device files
       0 fifos
4294967170 links
    8046 symbolic links (6904 fast symbolic links)
       1 socket
--------
   74133 files
 
 



Maxo

I'm pretty pumped, as all of the MySQL files are now readable and appear to be intact. It will be a while before I have the opportunity to actually try restoring those files and seeing if they work.
If this works I'm shipping some beers out your way, leibold.



leibold

said by Maxo:

If this works I'm shipping some beers out your way, leibold.

Better not, I don't drink.

The file with inode number 90322 got truncated during the fsck. To identify which file that was, you can use the find command after mounting the filesystem, e.g.:

mount -r /dev/loop0 /mnt
find /mnt -inum 90322 -print

It is also possible to determine which files were corrupted by the defects in the SD card by checking the list of bad blocks that dd_rescue reported. This is a bit more involved and you can find some related information here. Note 1: you don't need to run any program to find bad blocks, since you already have the bad block list (running badblocks on the copy wouldn't find any). Note 2: in addition to the calculation regarding block sizes, you also need to subtract the offset of the 2nd partition.
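The bad-block-to-file calculation can be sketched like this (75497472 is the 2nd partition offset from the losetup command earlier in the thread; the 1kB filesystem block size is an assumption):

```shell
# Map a bad spot reported by dd_rescue back to an ext4 block number:
# subtract the 2nd partition's byte offset from the bad byte position
# on the device, then divide by the filesystem block size. The bad
# position 2293760.0k is taken from the dd_rescue output above.
PART_OFFSET=75497472
BAD_BYTE=$((2293760 * 1024))      # first bad position dd_rescue printed
FS_BLOCK_SIZE=1024                # assumed; check with dumpe2fs
FS_BLOCK=$(( (BAD_BYTE - PART_OFFSET) / FS_BLOCK_SIZE ))
echo "filesystem block: $FS_BLOCK"
# debugfs (from e2fsprogs) can then translate block numbers to inode
# numbers, and inode numbers to path names (not run here):
#   debugfs -R "icheck $FS_BLOCK" /dev/loop0
#   debugfs -R "ncheck <inode>" /dev/loop0
```

Repeating this for each entry in the bad block list gives the set of inodes, and thus files, that actually lost data.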


Maxo

I got my server back up and running and set up MySQL with my October backup.
From my recovery image I pulled out /var/lib/mysql and put it on the server, but MySQL wouldn't start up. So I put back the database from my restore and tried adding in just /var/lib/mysql/gnucash, a schema that I added a week before the crash.
With that in place I can see it from "show schemas", and "show tables" lists all of the gnucash tables. However, accessing the tables doesn't go so well.

mysql> use gnucash;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+-------------------+
| Tables_in_gnucash |
+-------------------+
| accounts |
| billterms |
| books |
| budget_amounts |
| budgets |
| commodities |
| customers |
| employees |
| entries |
| gnclock |
| invoices |
| jobs |
| lots |
| orders |
| prices |
| recurrences |
| schedxactions |
| slots |
| splits |
| taxtable_entries |
| taxtables |
| transactions |
| vendors |
| versions |
+-------------------+
24 rows in set (0.00 sec)

mysql> select * from accounts;
ERROR 1146 (42S02): Table 'gnucash.accounts' doesn't exist
mysql> select * from transactions;
ERROR 1146 (42S02): Table 'gnucash.transactions' doesn't exist
mysql> select * from prices;
ERROR 1146 (42S02): Table 'gnucash.prices' doesn't exist
mysql> select * from vendors;
ERROR 1033 (HY000): Incorrect information in file: './gnucash/vendors.frm'
mysql>

At this point, do you think it is best to move the problem over to »Web Developers?


Maxo

Another bit of information that may be useful:

root@butter:/var/lib/mysql/gnucash# ls -l
total 634
-rw-rw---- 1 mysql mysql 33550 Feb 10 09:42 accounts.frm
-rw-rw---- 1 mysql mysql 9184 Jan 29 20:10 accounts.MYD
-rw-rw---- 1 mysql mysql 5120 Jan 29 20:10 accounts.MYI
-rw-rw---- 1 mysql mysql 25372 Feb 10 09:42 billterms.frm
-rw-rw---- 1 mysql mysql 0 Jan 29 20:10 billterms.MYD
-rw-rw---- 1 mysql mysql 1024 Jan 29 20:10 billterms.MYI
-rw-rw---- 1 mysql mysql 8674 Feb 10 09:42 books.frm
-rw-rw---- 1 mysql mysql 104 Jan 29 20:10 books.MYD
-rw-rw---- 1 mysql mysql 2048 Jan 29 20:10 books.MYI
-rw-rw---- 1 mysql mysql 8776 Feb 10 09:42 budget_amounts.frm
-rw-rw---- 1 mysql mysql 0 Jan 29 20:10 budget_amounts.MYD
-rw-rw---- 1 mysql mysql 1024 Jan 29 20:10 budget_amounts.MYI
-rw-rw---- 1 mysql mysql 20966 Feb 10 09:42 budgets.frm
-rw-rw---- 1 mysql mysql 0 Jan 29 20:10 budgets.MYD
-rw-rw---- 1 mysql mysql 1024 Jan 29 20:10 budgets.MYI
-rw-rw---- 1 mysql mysql 45736 Feb 10 09:42 commodities.frm
-rw-rw---- 1 mysql mysql 84 Jan 29 20:10 commodities.MYD
-rw-rw---- 1 mysql mysql 2048 Jan 29 20:10 commodities.MYI
-rw-rw---- 1 mysql mysql 58990 Feb 10 09:42 customers.frm
-rw-rw---- 1 mysql mysql 0 Jan 29 20:10 customers.MYD
-rw-rw---- 1 mysql mysql 1024 Jan 29 20:10 customers.MYI
-rw-rw---- 1 mysql mysql 61 Feb 10 09:42 db.opt
-rw-rw---- 1 mysql mysql 50314 Feb 10 09:42 employees.frm
-rw-rw---- 1 mysql mysql 0 Jan 29 20:10 employees.MYD
-rw-rw---- 1 mysql mysql 1024 Jan 29 20:10 employees.MYI
-rw-rw---- 1 mysql mysql 38538 Feb 10 09:42 entries.frm
-rw-rw---- 1 mysql mysql 0 Jan 29 20:10 entries.MYD
-rw-rw---- 1 mysql mysql 1024 Jan 29 20:10 entries.MYI
-rw-rw---- 1 mysql mysql 8596 Feb 10 09:42 gnclock.frm
-rw-rw---- 1 mysql mysql 0 Jan 29 20:10 gnclock.MYD
-rw-rw---- 1 mysql mysql 1024 Jan 29 20:10 gnclock.MYI
-rw-rw---- 1 mysql mysql 25626 Feb 10 09:42 invoices.frm
-rw-rw---- 1 mysql mysql 0 Jan 29 20:10 invoices.MYD
-rw-rw---- 1 mysql mysql 1024 Jan 29 20:10 invoices.MYI
-rw-rw---- 1 mysql mysql 25158 Feb 10 09:42 jobs.frm
-rw-rw---- 1 mysql mysql 0 Jan 29 20:10 jobs.MYD
-rw-rw---- 1 mysql mysql 1024 Jan 29 20:10 jobs.MYI
-rw-rw---- 1 mysql mysql 8646 Feb 10 09:42 lots.frm
-rw-rw---- 1 mysql mysql 0 Jan 29 20:10 lots.MYD
-rw-rw---- 1 mysql mysql 1024 Jan 29 20:10 lots.MYI
-rw-rw---- 1 mysql mysql 25248 Feb 10 09:42 orders.frm
-rw-rw---- 1 mysql mysql 0 Jan 29 20:10 orders.MYD
-rw-rw---- 1 mysql mysql 1024 Jan 29 20:10 orders.MYI
-rw-rw---- 1 mysql mysql 21124 Feb 10 09:42 prices.frm
-rw-rw---- 1 mysql mysql 0 Jan 29 20:10 prices.MYD
-rw-rw---- 1 mysql mysql 1024 Jan 29 20:10 prices.MYI
-rw-rw---- 1 mysql mysql 12876 Feb 10 09:42 recurrences.frm
-rw-rw---- 1 mysql mysql 0 Jan 29 20:10 recurrences.MYD
-rw-rw---- 1 mysql mysql 1024 Jan 29 20:10 recurrences.MYI
-rw-rw---- 1 mysql mysql 13206 Feb 10 09:42 schedxactions.frm
-rw-rw---- 1 mysql mysql 0 Jan 29 20:10 schedxactions.MYD
-rw-rw---- 1 mysql mysql 1024 Jan 29 20:10 schedxactions.MYI
-rw-rw---- 1 mysql mysql 33596 Feb 10 09:42 slots.frm
-rw-rw---- 1 mysql mysql 732 Jan 29 20:10 slots.MYD
-rw-rw---- 1 mysql mysql 3072 Jan 29 20:10 slots.MYI
-rw-rw---- 1 mysql mysql 21314 Feb 10 09:42 splits.frm
-rw-rw---- 1 mysql mysql 296 Jan 29 20:10 splits.MYD
-rw-rw---- 1 mysql mysql 4096 Jan 29 20:10 splits.MYI
-rw-rw---- 1 mysql mysql 8748 Feb 10 09:42 taxtable_entries.frm
-rw-rw---- 1 mysql mysql 0 Jan 29 20:10 taxtable_entries.MYD
-rw-rw---- 1 mysql mysql 1024 Jan 29 20:10 taxtable_entries.MYI
-rw-rw---- 1 mysql mysql 8702 Feb 10 09:42 taxtables.frm
-rw-rw---- 1 mysql mysql 0 Jan 29 20:10 taxtables.MYD
-rw-rw---- 1 mysql mysql 1024 Jan 29 20:10 taxtables.MYI
-rw-rw---- 1 mysql mysql 21050 Feb 10 09:42 transactions.frm
-rw-rw---- 1 mysql mysql 96 Jan 29 20:10 transactions.MYD
-rw-rw---- 1 mysql mysql 3072 Jan 29 20:10 transactions.MYI
-rw-rw---- 1 mysql mysql 50164 Feb 10 09:42 vendors.frm
-rw-rw---- 1 mysql mysql 0 Jan 29 20:10 vendors.MYD
-rw-rw---- 1 mysql mysql 1024 Jan 29 20:10 vendors.MYI
-rw-rw---- 1 mysql mysql 8620 Feb 10 09:42 versions.frm
-rw-rw---- 1 mysql mysql 504 Jan 29 20:10 versions.MYD
-rw-rw---- 1 mysql mysql 2048 Jan 29 20:10 versions.MYI



pablo (reply to Maxo)

Hi,

At this point, you'll want to scour the web for guidance on how to scrape data from a corrupted MySQL schema.

There may be some MySQL utilities which can be used to attempt a repair of the data files too.

Cheers,
-pablo



Maxo

said by pablo:

Hi,

At this point, you'll want to scour the web for guidance on how to scrape data from a corrupted MySQL schema.

There may be some MySQL utilities which can be used to attempt a repair of the data files too.

Cheers,
-pablo

I've done some looking around. I moved on from trying to repair the gnucash database, which is completely disposable, to working on is4c_op, which is an important schema. I was able to repair all tables except for the member_payments table.
Most of the tables were just fine and didn't need any repair.
Some tables were fixed by running "REPAIR TABLE <table_name> USE_FRM;".
Some tables were fixable by copying the .frm file from the good restore into place and then running the repair command.
I've got one more important schema to try to repair: is4c_log.


Maxo

I'm happy to report that I appear to have successfully recovered all tables by piecing together the binary files from the restored disk image and the mysqldump export from last October.
I have two more steps before I can move on: I need to make the server live, and I need to develop a proper backup mechanism so that I'm not in this rut again the next time I run into problems.
I'd really like to thank everyone who stopped by and gave suggestions. I only do this work as a volunteer, and my main knowledge is in coding, not server administration. I would not have been able to get a full recovery without your great advice.



leibold

A nice piece of set-it-up-and-forget-it remote backup software is rsnapshot.

Since you are using MySQL databases, I would combine a local mysqldump (easier to recover from than binary database files) with rsnapshot for frequent full-system backups. Don't let "full system backup" scare you: with rsnapshot only the first backup takes a long time, since all files need to be transferred from the database server to the backup destination. All subsequent backups transmit only new or changed data. Another nice feature of rsnapshot is its heavy use of hard links, which makes each snapshot a full and complete copy of the backed-up server while keeping total disk utilization for all snapshots very small (the size of one full backup plus the sum of all new and changed files).

For this suggestion:
1.) Run rsyncd on the database server (locked down to be accessible only from the backup server; run as root with no chroot, to allow backup of files with restricted access; read-only mode for added security, since that is all backups need). Ideally start it automatically on boot (e.g. from init.d).
2.) Periodically run mysqldump on the database server (e.g. from cron, so that nobody forgets it). It is fine if it always backs up to the same destination on the local disk.
3.) Run rsnapshot on the backup server to back up the entire database server (use daily, weekly and monthly cron jobs as desired). Add 'exclude' directives to rsnapshot.conf for virtual filesystems (/dev, /proc, /sys) or anything you really don't want to back up. I also added '--bwlimit=...' to the rsync arguments inside rsnapshot.conf, which reduces the impact of backups taken on always-on production servers.
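A sketch of what steps 2 and 3 could look like in practice. Paths, retention counts, bandwidth limit, module name and schedules are placeholder values to adapt, not a tested configuration; older rsnapshot versions spell the 'retain' directive as 'interval':

```
# crontab entry on the database server (step 2): nightly dump to local disk
30 1 * * *   root   mysqldump --all-databases > /var/backups/mysql/all.sql

# rsnapshot.conf excerpt on the backup server (step 3); fields must be
# separated by TABs in the real file
snapshot_root   /backup/snapshots/
retain          daily   7
retain          weekly  4
retain          monthly 3
exclude         /dev
exclude         /proc
exclude         /sys
rsync_long_args --delete --numeric-ids --bwlimit=2000
backup          rsync://dbserver/root/        dbserver/

# crontab entries on the backup server driving the snapshots
45 2 * * *   root   /usr/bin/rsnapshot daily
30 3 * * 6   root   /usr/bin/rsnapshot weekly
15 4 1 * *   root   /usr/bin/rsnapshot monthly
```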

Edit: before I get a lot of flak for the "forget-it" part, I specifically mean the low-maintenance aspect of rsnapshot once it is configured. It doesn't hurt to periodically check rsnapshot.log to see that everything is going smoothly, just as many other logfiles ought to be periodically reviewed (before warnings turn into fatal errors).
